DPDK patches and discussions
* [dpdk-dev] [PATCH 00/33] add support for host based flow table management
@ 2020-03-17 15:37 Venkat Duvvuru
  2020-03-17 15:37 ` [dpdk-dev] [PATCH 01/33] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
                   ` (33 more replies)
  0 siblings, 34 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:37 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

This patchset introduces a new mechanism for host-memory based
flow table management, which should allow higher flow scalability
than is currently supported. The new approach also defines an
rte_flow parser and mapper, which currently support basic packet
classification in the receive path. The patchset uses a newly
implemented control-plane firmware interface that optimizes flow
insertions and deletions.

This is a baseline patchset with limited scale. Follow-on patches will
add support for more protocol headers, rte_flow attributes, actions,
and so on.
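
As an illustration, the kind of rule the baseline parser and mapper
are expected to handle could be created through the public rte_flow
API as sketched below. This is only a sketch: the items and actions
actually accepted are determined by the flow templates in
ulp_template_db.c, and the helper name and mark id here are invented
for the example.

  #include <stdint.h>
  #include <rte_flow.h>

  /* Ingress rule: match eth/ipv4 and mark matching packets, so the
   * mark manager can inject the id into the packet's mbuf on receive.
   */
  static struct rte_flow *
  create_basic_flow(uint16_t port_id, struct rte_flow_error *error)
  {
          struct rte_flow_attr attr = { .ingress = 1 };
          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH },
                  { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          struct rte_flow_action_mark mark = { .id = 0x1234 };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          return rte_flow_create(port_id, &attr, pattern, actions, error);
  }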

The code path is currently disabled by default and can be enabled
using the CONFIG_RTE_LIBRTE_BNXT_TRUFLOW config flag.
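
For reference, enabling it amounts to a one-line change in
config/common_base (assuming the flag follows the usual DPDK
convention of defaulting to 'n'; the exact default is set by the
config patch at the end of this series):

  CONFIG_RTE_LIBRTE_BNXT_TRUFLOW=y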

Ajit Kumar Khaparde (1):
  net/bnxt: add updated dpdk hsi structure

Farah Smith (2):
  net/bnxt: add tf core identifier support
  net/bnxt: add tf core table scope support

Kishore Padmanabha (8):
  net/bnxt: match rte flow items with flow template patterns
  net/bnxt: match rte flow actions with flow template actions
  net/bnxt: add support for rte flow item parsing
  net/bnxt: add support for rte flow action parsing
  net/bnxt: add support for rte flow create driver hook
  net/bnxt: add support for rte flow validate driver hook
  net/bnxt: add support for rte flow destroy driver hook
  net/bnxt: add support for rte flow flush driver hook

Michael Wildt (4):
  net/bnxt: add initial tf core session open
  net/bnxt: add initial tf core session close support
  net/bnxt: add tf core session sram functions
  net/bnxt: add resource manager functionality

Mike Baucom (5):
  net/bnxt: add helper functions for blob/regfile ops
  net/bnxt: add support to process action tables
  net/bnxt: add support to process key tables
  net/bnxt: add support to free key and action tables
  net/bnxt: add support to alloc and program key and act tbls

Pete Spreadborough (2):
  net/bnxt: add truflow message handlers
  net/bnxt: add EM/EEM functionality

Randy Schacher (1):
  net/bnxt: update hwrm prep to use ptr

Shahaji Bhosle (2):
  net/bnxt: add initial tf core resource mgmt support
  net/bnxt: add tf core TCAM support

Venkat Duvvuru (8):
  net/bnxt: fetch SVIF information from the firmware
  net/bnxt: fetch vnic info from DPDK port
  net/bnxt: add support for ULP session manager init
  net/bnxt: add support for ULP session manager cleanup
  net/bnxt: register tf rte flow ops
  net/bnxt: disable vector mode when BNXT TRUFLOW is enabled
  net/bnxt: add support for injecting mark into packet’s mbuf
  config: introduce BNXT TRUFLOW config flag

 config/common_base                              |    1 +
 drivers/net/bnxt/Makefile                       |   23 +
 drivers/net/bnxt/bnxt.h                         |   25 +-
 drivers/net/bnxt/bnxt_ethdev.c                  |   44 +
 drivers/net/bnxt/bnxt_hwrm.c                    |  323 +-
 drivers/net/bnxt/bnxt_hwrm.h                    |   19 +
 drivers/net/bnxt/bnxt_rxr.c                     |  156 +-
 drivers/net/bnxt/hsi_struct_def_dpdk.h          | 3786 ++++++++++++++++++++---
 drivers/net/bnxt/tf_core/bitalloc.c             |  364 +++
 drivers/net/bnxt/tf_core/bitalloc.h             |  119 +
 drivers/net/bnxt/tf_core/hwrm_tf.h              |  992 ++++++
 drivers/net/bnxt/tf_core/lookup3.h              |  161 +
 drivers/net/bnxt/tf_core/rand.c                 |   47 +
 drivers/net/bnxt/tf_core/rand.h                 |   36 +
 drivers/net/bnxt/tf_core/stack.c                |  107 +
 drivers/net/bnxt/tf_core/stack.h                |  107 +
 drivers/net/bnxt/tf_core/tf_core.c              |  659 ++++
 drivers/net/bnxt/tf_core/tf_core.h              | 1376 ++++++++
 drivers/net/bnxt/tf_core/tf_em.c                |  516 +++
 drivers/net/bnxt/tf_core/tf_em.h                |  117 +
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h   |  166 +
 drivers/net/bnxt/tf_core/tf_msg.c               | 1248 ++++++++
 drivers/net/bnxt/tf_core/tf_msg.h               |  256 ++
 drivers/net/bnxt/tf_core/tf_msg_common.h        |   47 +
 drivers/net/bnxt/tf_core/tf_project.h           |   24 +
 drivers/net/bnxt/tf_core/tf_resources.h         |  542 ++++
 drivers/net/bnxt/tf_core/tf_rm.c                | 3297 ++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_rm.h                |  321 ++
 drivers/net/bnxt/tf_core/tf_session.h           |  300 ++
 drivers/net/bnxt/tf_core/tf_tbl.c               | 1836 +++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.h               |  126 +
 drivers/net/bnxt/tf_core/tfp.c                  |  163 +
 drivers/net/bnxt/tf_core/tfp.h                  |  188 ++
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h        |   54 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c              |  695 +++++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h              |  110 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c         |  303 ++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c           |  626 ++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h           |  156 +
 drivers/net/bnxt/tf_ulp/ulp_mapper.c            | 1502 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h            |   69 +
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c          |  271 ++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h          |  111 +
 drivers/net/bnxt/tf_ulp/ulp_matcher.c           |  188 ++
 drivers/net/bnxt/tf_ulp/ulp_matcher.h           |   35 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c        | 1208 ++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h        |  203 ++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c       | 1712 ++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h       |  354 +++
 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h |  133 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h   |  266 ++
 drivers/net/bnxt/tf_ulp/ulp_utils.c             |  521 ++++
 drivers/net/bnxt/tf_ulp/ulp_utils.h             |  279 ++
 53 files changed, 25794 insertions(+), 494 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.c
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.h
 create mode 100644 drivers/net/bnxt/tf_core/hwrm_tf.h
 create mode 100644 drivers/net/bnxt/tf_core/lookup3.h
 create mode 100644 drivers/net/bnxt/tf_core/rand.c
 create mode 100644 drivers/net/bnxt/tf_core/rand.h
 create mode 100644 drivers/net/bnxt/tf_core/stack.c
 create mode 100644 drivers/net/bnxt/tf_core/stack.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_project.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_resources.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tfp.c
 create mode 100644 drivers/net/bnxt/tf_core/tfp.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.c
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_struct.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.h

-- 
2.7.4



* [dpdk-dev] [PATCH 01/33] net/bnxt: add updated dpdk hsi structure
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
@ 2020-03-17 15:37 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 02/33] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
                   ` (32 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:37 UTC (permalink / raw)
  To: dev; +Cc: Ajit Kumar Khaparde, Randy Schacher

From: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>

- Add the most recent bnxt DPDK HSI header.
- HWRM version updated to 1.10.1.30.
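
A consumer of this header can detect TruFlow-capable firmware from
the VER_GET response. A minimal sketch follows (the helper name is
hypothetical; the flag and field names are as defined in this
header):

  #include <stdbool.h>
  #include <rte_byteorder.h>
  #include "hsi_struct_def_dpdk.h"

  /* True if the firmware advertises TruFlow support in dev_caps_cfg. */
  static bool
  bnxt_fw_has_truflow(const struct hwrm_ver_get_output *resp)
  {
          uint32_t caps = rte_le_to_cpu_32(resp->dev_caps_cfg);

          return (caps &
                  HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_TRUFLOW_SUPPORTED) != 0;
  }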

Signed-off-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
---
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 3786 +++++++++++++++++++++++++++++---
 1 file changed, 3436 insertions(+), 350 deletions(-)

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index c2bae0f..cde96e7 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2014-2019 Broadcom Inc.
+ * Copyright (c) 2014-2020 Broadcom Inc.
  * All rights reserved.
  *
  * DO NOT MODIFY!!! This file is automatically generated.
@@ -386,6 +386,8 @@ struct cmd_nums {
 	#define HWRM_PORT_PHY_MDIO_READ                   UINT32_C(0xb6)
 	#define HWRM_PORT_PHY_MDIO_BUS_ACQUIRE            UINT32_C(0xb7)
 	#define HWRM_PORT_PHY_MDIO_BUS_RELEASE            UINT32_C(0xb8)
+	#define HWRM_PORT_QSTATS_EXT_PFC_WD               UINT32_C(0xb9)
+	#define HWRM_PORT_ECN_QSTATS                      UINT32_C(0xba)
 	#define HWRM_FW_RESET                             UINT32_C(0xc0)
 	#define HWRM_FW_QSTATUS                           UINT32_C(0xc1)
 	#define HWRM_FW_HEALTH_CHECK                      UINT32_C(0xc2)
@@ -404,6 +406,8 @@ struct cmd_nums {
 	#define HWRM_FW_GET_STRUCTURED_DATA               UINT32_C(0xcb)
 	/* Experimental */
 	#define HWRM_FW_IPC_MAILBOX                       UINT32_C(0xcc)
+	#define HWRM_FW_ECN_CFG                           UINT32_C(0xcd)
+	#define HWRM_FW_ECN_QCFG                          UINT32_C(0xce)
 	#define HWRM_EXEC_FWD_RESP                        UINT32_C(0xd0)
 	#define HWRM_REJECT_FWD_RESP                      UINT32_C(0xd1)
 	#define HWRM_FWD_RESP                             UINT32_C(0xd2)
@@ -419,6 +423,7 @@ struct cmd_nums {
 	#define HWRM_TEMP_MONITOR_QUERY                   UINT32_C(0xe0)
 	#define HWRM_REG_POWER_QUERY                      UINT32_C(0xe1)
 	#define HWRM_CORE_FREQUENCY_QUERY                 UINT32_C(0xe2)
+	#define HWRM_REG_POWER_HISTOGRAM                  UINT32_C(0xe3)
 	#define HWRM_WOL_FILTER_ALLOC                     UINT32_C(0xf0)
 	#define HWRM_WOL_FILTER_FREE                      UINT32_C(0xf1)
 	#define HWRM_WOL_FILTER_QCFG                      UINT32_C(0xf2)
@@ -510,7 +515,7 @@ struct cmd_nums {
 	#define HWRM_CFA_EEM_OP                           UINT32_C(0x123)
 	/* Experimental */
 	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS              UINT32_C(0x124)
-	/* Experimental */
+	/* Experimental - DEPRECATED */
 	#define HWRM_CFA_TFLIB                            UINT32_C(0x125)
 	/* Engine CKV - Get the current allocation status of keys provisioned in the key vault. */
 	#define HWRM_ENGINE_CKV_STATUS                    UINT32_C(0x12e)
@@ -629,6 +634,56 @@ struct cmd_nums {
 	 * to the host test.
 	 */
 	#define HWRM_MFG_HDMA_TEST                        UINT32_C(0x209)
+	/* Tells the fw to program the fru memory */
+	#define HWRM_MFG_FRU_EEPROM_WRITE                 UINT32_C(0x20a)
+	/* Tells the fw to read the fru memory */
+	#define HWRM_MFG_FRU_EEPROM_READ                  UINT32_C(0x20b)
+	/* Experimental */
+	#define HWRM_TF                                   UINT32_C(0x2bc)
+	/* Experimental */
+	#define HWRM_TF_VERSION_GET                       UINT32_C(0x2bd)
+	/* Experimental */
+	#define HWRM_TF_SESSION_OPEN                      UINT32_C(0x2c6)
+	/* Experimental */
+	#define HWRM_TF_SESSION_ATTACH                    UINT32_C(0x2c7)
+	/* Experimental */
+	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2c8)
+	/* Experimental */
+	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2c9)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2ca)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cb)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2cc)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cd)
+	/* Experimental */
+	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2d0)
+	/* Experimental */
+	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2d1)
+	/* Experimental */
+	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2da)
+	/* Experimental */
+	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2db)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2dc)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2dd)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2de)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2df)
+	/* Experimental */
+	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2ee)
+	/* Experimental */
+	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2ef)
+	/* Experimental */
+	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2f0)
+	/* Experimental */
+	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2f1)
+	/* Experimental */
+	#define HWRM_SV                                   UINT32_C(0x400)
 	/* Experimental */
 	#define HWRM_DBG_READ_DIRECT                      UINT32_C(0xff10)
 	/* Experimental */
@@ -658,6 +713,8 @@ struct cmd_nums {
 	#define HWRM_DBG_CRASHDUMP_HEADER                 UINT32_C(0xff1d)
 	/* Experimental */
 	#define HWRM_DBG_CRASHDUMP_ERASE                  UINT32_C(0xff1e)
+	/* Send driver debug information to firmware */
+	#define HWRM_DBG_DRV_TRACE                        UINT32_C(0xff1f)
 	/* Experimental */
 	#define HWRM_NVM_FACTORY_DEFAULTS                 UINT32_C(0xffee)
 	#define HWRM_NVM_VALIDATE_OPTION                  UINT32_C(0xffef)
@@ -857,8 +914,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 1
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 6
-#define HWRM_VERSION_STR "1.10.1.6"
+#define HWRM_VERSION_RSVD 30
+#define HWRM_VERSION_STR "1.10.1.30"
 
 /****************
  * hwrm_ver_get *
@@ -1143,6 +1200,7 @@ struct hwrm_ver_get_output {
 	#define HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_ADV_FLOW_MGNT_SUPPORTED \
 		UINT32_C(0x1000)
 	/*
+	 * Deprecated and replaced with cfa_truflow_supported.
 	 * If set to 1, the firmware is able to support TFLIB features.
 	 * If set to 0, then the firmware doesn’t support TFLIB features.
 	 * By default, this flag should be 0 for older version of core firmware.
@@ -1150,6 +1208,14 @@ struct hwrm_ver_get_output {
 	#define HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_TFLIB_SUPPORTED \
 		UINT32_C(0x2000)
 	/*
+	 * If set to 1, the firmware is able to support TruFlow features.
+	 * If set to 0, then the firmware doesn’t support TruFlow features.
+	 * By default, this flag should be 0 for older version of
+	 * core firmware.
+	 */
+	#define HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_TRUFLOW_SUPPORTED \
+		UINT32_C(0x4000)
+	/*
 	 * This field represents the major version of RoCE firmware.
 	 * A change in major version represents a major release.
 	 */
@@ -4508,10 +4574,16 @@ struct hwrm_async_event_cmpl {
 	 */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_EEM_CFG_CHANGE \
 		UINT32_C(0x3c)
-	/* TFLIB unique default VNIC Configuration Change */
+	/*
+	 * Deprecated.
+	 * TFLIB unique default VNIC Configuration Change
+	 */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_TFLIB_DEFAULT_VNIC_CHANGE \
 		UINT32_C(0x3d)
-	/* TFLIB unique link status changed */
+	/*
+	 * Deprecated.
+	 * TFLIB unique link status changed
+	 */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_TFLIB_LINK_STATUS_CHANGE \
 		UINT32_C(0x3e)
 	/*
@@ -4521,6 +4593,19 @@ struct hwrm_async_event_cmpl {
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_QUIESCE_DONE \
 		UINT32_C(0x3f)
 	/*
+	 * An event signifying a HWRM command is in progress and its
+	 * response will be deferred. This event is used on crypto controllers
+	 * only.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_DEFERRED_RESPONSE \
+		UINT32_C(0x40)
+	/*
+	 * An event signifying that a PFC WatchDog configuration
+	 * has changed on any port / cos.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE \
+		UINT32_C(0x41)
+	/*
 	 * A trace log message. This contains firmware trace logs string
 	 * embedded in the asynchronous message. This is an experimental
 	 * event, not meant for production use at this time.
@@ -6393,6 +6478,36 @@ struct hwrm_async_event_cmpl_quiesce_done {
 		UINT32_C(0x2)
 	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_QUIESCE_STATUS_LAST \
 		HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_QUIESCE_STATUS_ERROR
+	/* opaque is 8 b */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_OPAQUE_MASK \
+		UINT32_C(0xff00)
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_OPAQUE_SFT \
+		8
+	/*
+	 * Additional information about internal hardware state related to
+	 * idle/quiesce state.  QUIESCE may succeed per quiesce_status
+	 * regardless of idle_state_flags.  If QUIESCE fails, the host may
+	 * inspect idle_state_flags to determine whether a retry is warranted.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_MASK \
+		UINT32_C(0xff0000)
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_SFT \
+		16
+	/*
+	 * Failure to quiesce is caused by host not updating the NQ consumer
+	 * index.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_INCOMPLETE_NQ \
+		UINT32_C(0x10000)
+	/* Flag 1 indicating partial non-idle state. */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_IDLE_STATUS_1 \
+		UINT32_C(0x20000)
+	/* Flag 2 indicating partial non-idle state. */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_IDLE_STATUS_2 \
+		UINT32_C(0x40000)
+	/* Flag 3 indicating partial non-idle state. */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_IDLE_STATUS_3 \
+		UINT32_C(0x80000)
 	uint8_t	opaque_v;
 	/*
 	 * This value is written by the NIC such that it will be different
@@ -6414,6 +6529,152 @@ struct hwrm_async_event_cmpl_quiesce_done {
 		UINT32_C(0x1)
 } __attribute__((packed));
 
+/* hwrm_async_event_cmpl_deferred_response (size:128b/16B) */
+struct hwrm_async_event_cmpl_deferred_response {
+	uint16_t	type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_SFT             0
+	/* HWRM Asynchronous Event Information */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_HWRM_ASYNC_EVENT \
+		UINT32_C(0x2e)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_HWRM_ASYNC_EVENT
+	/* Identifiers of events. */
+	uint16_t	event_id;
+	/*
+	 * An event signifying a HWRM command is in progress and its
+	 * response will be deferred
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_ID_DEFERRED_RESPONSE \
+		UINT32_C(0x40)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_ID_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_ID_DEFERRED_RESPONSE
+	/* Event specific data */
+	uint32_t	event_data2;
+	/*
+	 * The PF's mailbox is clear to issue another command.
+	 * A command with this seq_id is still in progress
+	 * and will return a regular HWRM completion when done.
+	 * 'event_data1' field, if non-zero, contains the estimated
+	 * execution time for the command.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_DATA2_SEQ_ID_MASK \
+		UINT32_C(0xffff)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_DATA2_SEQ_ID_SFT \
+		0
+	uint8_t	opaque_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_V \
+		UINT32_C(0x1)
+	/* opaque is 7 b */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_OPAQUE_MASK \
+		UINT32_C(0xfe)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_OPAQUE_SFT 1
+	/* 8-lsb timestamp from POR (100-msec resolution) */
+	uint8_t	timestamp_lo;
+	/* 16-lsb timestamp from POR (100-msec resolution) */
+	uint16_t	timestamp_hi;
+	/* Estimated remaining time of command execution in ms (if not zero) */
+	uint32_t	event_data1;
+} __attribute__((packed));
+
+/* hwrm_async_event_cmpl_pfc_watchdog_cfg_change (size:128b/16B) */
+struct hwrm_async_event_cmpl_pfc_watchdog_cfg_change {
+	uint16_t	type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_SFT \
+		0
+	/* HWRM Asynchronous Event Information */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_HWRM_ASYNC_EVENT \
+		UINT32_C(0x2e)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_LAST \
+		HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_HWRM_ASYNC_EVENT
+	/* Identifiers of events. */
+	uint16_t	event_id;
+	/* PFC watchdog configuration change for given port/cos */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE \
+		UINT32_C(0x41)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_ID_LAST \
+		HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE
+	/* Event specific data */
+	uint32_t	event_data2;
+	uint8_t	opaque_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_V \
+		UINT32_C(0x1)
+	/* opaque is 7 b */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_OPAQUE_MASK \
+		UINT32_C(0xfe)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_OPAQUE_SFT 1
+	/* 8-lsb timestamp from POR (100-msec resolution) */
+	uint8_t	timestamp_lo;
+	/* 16-lsb timestamp from POR (100-msec resolution) */
+	uint16_t	timestamp_hi;
+	/* Event specific data */
+	uint32_t	event_data1;
+	/*
+	 * 1 in bit position X indicates PFC watchdog should
+	 * be on for COSX
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_MASK \
+		UINT32_C(0xff)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_SFT \
+		0
+	/* 1 means PFC WD for COS0 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS0 \
+		UINT32_C(0x1)
+	/* 1 means PFC WD for COS1 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS1 \
+		UINT32_C(0x2)
+	/* 1 means PFC WD for COS2 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS2 \
+		UINT32_C(0x4)
+	/* 1 means PFC WD for COS3 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS3 \
+		UINT32_C(0x8)
+	/* 1 means PFC WD for COS4 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS4 \
+		UINT32_C(0x10)
+	/* 1 means PFC WD for COS5 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS5 \
+		UINT32_C(0x20)
+	/* 1 means PFC WD for COS6 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS6 \
+		UINT32_C(0x40)
+	/* 1 means PFC WD for COS7 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS7 \
+		UINT32_C(0x80)
+	/* PORT ID */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PORT_ID_MASK \
+		UINT32_C(0xffff00)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PORT_ID_SFT \
+		8
+} __attribute__((packed));
+
 /* hwrm_async_event_cmpl_fw_trace_msg (size:128b/16B) */
 struct hwrm_async_event_cmpl_fw_trace_msg {
 	uint16_t	type;
@@ -7220,7 +7481,7 @@ struct hwrm_func_qcaps_input {
 	uint8_t	unused_0[6];
 } __attribute__((packed));
 
-/* hwrm_func_qcaps_output (size:640b/80B) */
+/* hwrm_func_qcaps_output (size:704b/88B) */
 struct hwrm_func_qcaps_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -7441,6 +7702,33 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_NOTIFY_VF_DEF_VNIC_CHNG_SUPPORTED \
 		UINT32_C(0x4000000)
+	/* If set to 1, then the vlan acceleration for TX is disabled. */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_VLAN_ACCELERATION_TX_DISABLED \
+		UINT32_C(0x8000000)
+	/*
+	 * When this bit is '1', it indicates that core firmware supports
+	 * DBG_COREDUMP_XXX commands.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_COREDUMP_CMD_SUPPORTED \
+		UINT32_C(0x10000000)
+	/*
+	 * When this bit is '1', it indicates that core firmware supports
+	 * DBG_CRASHDUMP_XXX commands.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_CRASHDUMP_CMD_SUPPORTED \
+		UINT32_C(0x20000000)
+	/*
+	 * If the query is for a VF, then this flag should be ignored.
+	 * If the query is for a PF and this flag is set to 1, then
+	 * the PF has the capability to support retrieval of
+	 * rx_port_stats_ext_pfc_wd statistics (supported by the PFC
+	 * WatchDog feature) via the hwrm_port_qstats_ext_pfc_wd command.
+	 * If this flag is set to 1, only that (supported) command should
+	 * be used for retrieval of PFC related statistics (rather than
+	 * hwrm_port_qstats_ext command, which could previously be used).
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_PFC_WD_STATS_SUPPORTED \
+		UINT32_C(0x40000000)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
@@ -7551,7 +7839,22 @@ struct hwrm_func_qcaps_output {
 	 * (max_tx_rings) to the function.
 	 */
 	uint16_t	max_sp_tx_rings;
-	uint8_t	unused_0;
+	uint8_t	unused_0[2];
+	uint32_t	flags_ext;
+	/*
+	 * If 1, the device can be configured to set the ECN bits in the
+	 * IP header of received packets if the receive queue length
+	 * exceeds a given threshold.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_MARK_SUPPORTED \
+		UINT32_C(0x1)
+	/*
+	 * If 1, the device can report the number of received packets
+	 * that it marked as having experienced congestion.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_STATS_SUPPORTED \
+		UINT32_C(0x2)
+	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -7606,7 +7909,7 @@ struct hwrm_func_qcfg_input {
 	uint8_t	unused_0[6];
 } __attribute__((packed));
 
-/* hwrm_func_qcfg_output (size:704b/88B) */
+/* hwrm_func_qcfg_output (size:768b/96B) */
 struct hwrm_func_qcfg_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -8016,7 +8319,17 @@ struct hwrm_func_qcfg_output {
 	 * this value to find out the doorbell page offset from the BAR.
 	 */
 	uint16_t	legacy_l2_db_size_kb;
-	uint8_t	unused_2[1];
+	uint16_t	svif_info;
+	/*
+	 * This field specifies the source virtual interface of the function being
+	 * queried. Drivers can use this to program svif field in the L2 context
+	 * table
+	 */
+	#define HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK      UINT32_C(0x7fff)
+	#define HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_SFT       0
+	/* This field specifies whether svif is valid or not */
+	#define HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID     UINT32_C(0x8000)
+	uint8_t	unused_2[7];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -9862,8 +10175,12 @@ struct hwrm_func_backing_store_qcaps_output {
 	uint32_t	rsvd;
 	/* Reserved for future. */
 	uint16_t	rsvd1;
-	/* Reserved for future. */
-	uint8_t	rsvd2;
+	/*
+	 * Count of TQM fastpath rings to be used for allocating backing store.
+	 * Backing store configuration must be specified for each TQM ring from
+	 * this count in `backing_store_cfg`.
+	 */
+	uint8_t	tqm_fp_rings_count;
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -12178,116 +12495,163 @@ struct hwrm_error_recovery_qcfg_output {
 	 * this much time after writing reset_reg_val in reset_reg.
 	 */
 	uint8_t	delay_after_reset[16];
-	uint8_t	unused_1[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM.  This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal
-	 * processor, the order of writes has to be such that this field
-	 * is written last.
-	 */
-	uint8_t	valid;
-} __attribute__((packed));
-
-/***********************
- * hwrm_func_vlan_qcfg *
- ***********************/
-
-
-/* hwrm_func_vlan_qcfg_input (size:192b/24B) */
-struct hwrm_func_vlan_qcfg_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
+	 * Error recovery counter.
+	 * Lower 2 bits indicates address space location and upper 30 bits
+	 * indicates actual address.
+	 * A value of 0xFFFF-FFFF indicates this register does not exist.
 	 */
-	uint16_t	target_id;
+	uint32_t	err_recovery_cnt_reg;
+	/* Lower 2 bits indicates address space location. */
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_MASK \
+		UINT32_C(0x3)
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_SFT \
+		0
 	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
+	 * If value is 0, this register is located in PCIe config space.
+	 * Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint64_t	resp_addr;
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_PCIE_CFG \
+		UINT32_C(0x0)
 	/*
-	 * Function ID of the function that is being
-	 * configured.
-	 * If set to 0xFF... (All Fs), then the configuration is
-	 * for the requesting function.
+	 * If value is 1, this register is located in GRC address space.
+	 * Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint16_t	fid;
-	uint8_t	unused_0[6];
-} __attribute__((packed));
-
-/* hwrm_func_vlan_qcfg_output (size:320b/40B) */
-struct hwrm_func_vlan_qcfg_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	uint64_t	unused_0;
-	/* S-TAG VLAN identifier configured for the function. */
-	uint16_t	stag_vid;
-	/* S-TAG PCP value configured for the function. */
-	uint8_t	stag_pcp;
-	uint8_t	unused_1;
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_GRC \
+		UINT32_C(0x1)
 	/*
-	 * S-TAG TPID value configured for the function. This field is specified in
-	 * network byte order.
+	 * If value is 2, this register is located in first BAR address
+	 * space. Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint16_t	stag_tpid;
-	/* C-TAG VLAN identifier configured for the function. */
-	uint16_t	ctag_vid;
-	/* C-TAG PCP value configured for the function. */
-	uint8_t	ctag_pcp;
-	uint8_t	unused_2;
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_BAR0 \
+		UINT32_C(0x2)
 	/*
-	 * C-TAG TPID value configured for the function. This field is specified in
-	 * network byte order.
+	 * If value is 3, this register is located in second BAR address
+	 * space. Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint16_t	ctag_tpid;
-	/* Future use. */
-	uint32_t	rsvd2;
-	/* Future use. */
-	uint32_t	rsvd3;
-	uint8_t	unused_3[3];
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_BAR1 \
+		UINT32_C(0x3)
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_LAST \
+		HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_BAR1
+	/* Upper 30bits of the register address. */
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_MASK \
+		UINT32_C(0xfffffffc)
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SFT \
+		2
+	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
 	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
 
-/**********************
- * hwrm_func_vlan_cfg *
- **********************/
+/***********************
+ * hwrm_func_vlan_qcfg *
+ ***********************/
 
 
-/* hwrm_func_vlan_cfg_input (size:384b/48B) */
-struct hwrm_func_vlan_cfg_input {
+/* hwrm_func_vlan_qcfg_input (size:192b/24B) */
+struct hwrm_func_vlan_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Function ID of the function that is being
+	 * configured.
+	 * If set to 0xFF... (All Fs), then the configuration is
+	 * for the requesting function.
+	 */
+	uint16_t	fid;
+	uint8_t	unused_0[6];
+} __attribute__((packed));
+
+/* hwrm_func_vlan_qcfg_output (size:320b/40B) */
+struct hwrm_func_vlan_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint64_t	unused_0;
+	/* S-TAG VLAN identifier configured for the function. */
+	uint16_t	stag_vid;
+	/* S-TAG PCP value configured for the function. */
+	uint8_t	stag_pcp;
+	uint8_t	unused_1;
+	/*
+	 * S-TAG TPID value configured for the function. This field is specified in
+	 * network byte order.
+	 */
+	uint16_t	stag_tpid;
+	/* C-TAG VLAN identifier configured for the function. */
+	uint16_t	ctag_vid;
+	/* C-TAG PCP value configured for the function. */
+	uint8_t	ctag_pcp;
+	uint8_t	unused_2;
+	/*
+	 * C-TAG TPID value configured for the function. This field is specified in
+	 * network byte order.
+	 */
+	uint16_t	ctag_tpid;
+	/* Future use. */
+	uint32_t	rsvd2;
+	/* Future use. */
+	uint32_t	rsvd3;
+	uint8_t	unused_3[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/**********************
+ * hwrm_func_vlan_cfg *
+ **********************/
+
+
+/* hwrm_func_vlan_cfg_input (size:384b/48B) */
+struct hwrm_func_vlan_cfg_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -14039,6 +14403,9 @@ struct hwrm_port_phy_qcfg_output {
 	/* Module is not inserted. */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_NOTINSERTED \
 		UINT32_C(0x4)
+	/* Module is powered down because of an overcurrent fault. */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_CURRENTFAULT \
+		UINT32_C(0x5)
 	/* Module status is not applicable. */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_NOTAPPLICABLE \
 		UINT32_C(0xff)
@@ -15010,7 +15377,7 @@ struct hwrm_port_mac_qcfg_input {
 	uint8_t	unused_0[6];
 } __attribute__((packed));
 
-/* hwrm_port_mac_qcfg_output (size:192b/24B) */
+/* hwrm_port_mac_qcfg_output (size:256b/32B) */
 struct hwrm_port_mac_qcfg_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -15250,6 +15617,20 @@ struct hwrm_port_mac_qcfg_output {
 		UINT32_C(0xe0)
 	#define HWRM_PORT_MAC_QCFG_OUTPUT_COS_FIELD_CFG_DEFAULT_COS_SFT \
 		5
+	uint8_t	unused_1;
+	uint16_t	port_svif_info;
+	/*
+	 * This field specifies the source virtual interface of the port being
+	 * queried. Drivers can use this to program port svif field in the
+	 * L2 context table
+	 */
+	#define HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_MASK \
+		UINT32_C(0x7fff)
+	#define HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_SFT       0
+	/* This field specifies whether port_svif is valid or not */
+	#define HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_VALID \
+		UINT32_C(0x8000)
+	uint8_t	unused_2[5];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -15322,17 +15703,17 @@ struct hwrm_port_mac_ptp_qcfg_output {
 	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_DIRECT_ACCESS \
 		UINT32_C(0x1)
 	/*
-	 * When this bit is set to '1', the PTP information is accessible
-	 * via HWRM commands.
-	 */
-	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_HWRM_ACCESS \
-		UINT32_C(0x2)
-	/*
 	 * When this bit is set to '1', the device supports one-step
 	 * Tx timestamping.
 	 */
 	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_ONE_STEP_TX_TS \
 		UINT32_C(0x4)
+	/*
+	 * When this bit is set to '1', the PTP information is accessible
+	 * via HWRM commands.
+	 */
+	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_HWRM_ACCESS \
+		UINT32_C(0x8)
 	uint8_t	unused_0[3];
 	/* Offset of the PTP register for the lower 32 bits of timestamp for RX. */
 	uint32_t	rx_ts_reg_off_lower;
@@ -15375,7 +15756,7 @@ struct hwrm_port_mac_ptp_qcfg_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
-/* Port Tx Statistics Formats */
+/* Port Tx Statistics Format */
 /* tx_port_stats (size:3264b/408B) */
 struct tx_port_stats {
 	/* Total Number of 64 Bytes frames transmitted */
@@ -15516,7 +15897,7 @@ struct tx_port_stats {
 	uint64_t	tx_stat_error;
 } __attribute__((packed));
 
-/* Port Rx Statistics Formats */
+/* Port Rx Statistics Format */
 /* rx_port_stats (size:4224b/528B) */
 struct rx_port_stats {
 	/* Total Number of 64 Bytes frames received */
@@ -15806,7 +16187,7 @@ struct hwrm_port_qstats_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
-/* Port Tx Statistics extended Formats */
+/* Port Tx Statistics extended Format */
 /* tx_port_stats_ext (size:2048b/256B) */
 struct tx_port_stats_ext {
 	/* Total number of tx bytes count on cos queue 0 */
@@ -15875,7 +16256,7 @@ struct tx_port_stats_ext {
 	uint64_t	pfc_pri7_tx_transitions;
 } __attribute__((packed));
 
-/* Port Rx Statistics extended Formats */
+/* Port Rx Statistics extended Format */
 /* rx_port_stats_ext (size:3648b/456B) */
 struct rx_port_stats_ext {
 	/* Number of times link state changed to down */
@@ -15997,6 +16378,424 @@ struct rx_port_stats_ext {
 	uint64_t	rx_discard_packets_cos7;
 } __attribute__((packed));
 
+/*
+ * Port Rx Statistics extended PFC WatchDog Format.
+ * StormDetect and StormRevert event determination is based
+ * on an integration period and a percentage threshold.
+ * StormDetect event - when percentage of XOFF frames received
+ * within an integration period exceeds the configured threshold.
+ * StormRevert event - when percentage of XON frames received
+ * within an integration period exceeds the configured threshold.
+ * Actual number of XOFF/XON frames for the events to be triggered
+ * depends on both configured integration period and sampling rate.
+ * The statistics in this structure represent counts of specified
+ * events from the moment the feature (PFC WatchDog) is enabled via
+ * hwrm_queue_pfc_enable_cfg call.
+ */
+/* rx_port_stats_ext_pfc_wd (size:5120b/640B) */
+struct rx_port_stats_ext_pfc_wd {
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri0;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri1;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri2;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri3;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri4;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri5;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri6;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri7;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri0;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri1;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri2;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri3;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri4;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri5;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri6;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri7;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri0;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri1;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri2;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri3;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri4;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri5;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri6;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri7;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri0;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri1;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri2;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri3;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri4;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri5;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri6;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri7;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri0;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri1;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri2;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri3;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri4;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri5;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri6;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri7;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri0;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri1;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri2;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri3;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri4;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri5;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri6;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri7;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri0;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri1;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri2;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri3;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri4;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri5;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri6;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri7;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri0;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri1;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri2;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri3;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri4;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri5;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri6;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri7;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri0;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri1;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri2;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri3;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri4;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri5;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri6;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri7;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri0;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri1;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri2;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri3;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri4;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri5;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri6;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri7;
+} __attribute__((packed));
+
 /************************
  * hwrm_port_qstats_ext *
  ************************/
@@ -16090,6 +16889,83 @@ struct hwrm_port_qstats_ext_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
+/*******************************
+ * hwrm_port_qstats_ext_pfc_wd *
+ *******************************/
+
+
+/* hwrm_port_qstats_ext_pfc_wd_input (size:256b/32B) */
+struct hwrm_port_qstats_ext_pfc_wd_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Port ID of port that is being queried. */
+	uint16_t	port_id;
+	/*
+	 * The size of rx_port_stats_ext_pfc_wd
+	 * block in bytes
+	 */
+	uint16_t	pfc_wd_stat_size;
+	uint8_t	unused_0[4];
+	/*
+	 * This is the host address where
+	 * rx_port_stats_ext_pfc_wd will be stored
+	 */
+	uint64_t	pfc_wd_stat_host_addr;
+} __attribute__((packed));
+
+/* hwrm_port_qstats_ext_pfc_wd_output (size:128b/16B) */
+struct hwrm_port_qstats_ext_pfc_wd_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * The size of rx_port_stats_ext_pfc_wd
+	 * statistics block in bytes.
+	 */
+	uint16_t	pfc_wd_stat_size;
+	uint8_t	flags;
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+	uint8_t	unused_0[4];
+} __attribute__((packed));
+
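+/*
+ * Editor's note: illustrative usage sketch, not part of the generated
+ * HSI file. The firmware DMAs a struct rx_port_stats_ext_pfc_wd block
+ * into the host buffer described by the request, so the caller supplies
+ * the buffer's bus address and size. hwrm_send() below is a hypothetical
+ * stand-in for the driver's HWRM transport, which also fills req_type,
+ * seq_id and resp_addr (omitted here).
+ */
+extern int hwrm_send(void *bp, void *req, uint32_t req_len);
+
+static inline int
+example_query_pfc_wd_stats(void *bp, uint16_t port_id,
+			   uint64_t stats_iova, uint16_t stats_size)
+{
+	struct hwrm_port_qstats_ext_pfc_wd_input req = { 0 };
+
+	req.port_id = port_id;
+	/* Size of the rx_port_stats_ext_pfc_wd buffer, in bytes. */
+	req.pfc_wd_stat_size = stats_size;
+	/* DMA-able host address the firmware writes the stats into. */
+	req.pfc_wd_stat_host_addr = stats_iova;
+	return hwrm_send(bp, &req, sizeof(req));
+}
+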
 /*************************
  * hwrm_port_lpbk_qstats *
  *************************/
@@ -16168,6 +17044,91 @@ struct hwrm_port_lpbk_qstats_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
+/************************
+ * hwrm_port_ecn_qstats *
+ ************************/
+
+
+/* hwrm_port_ecn_qstats_input (size:192b/24B) */
+struct hwrm_port_ecn_qstats_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Port ID of port that is being queried. Unused if NIC is in
+	 * multi-host mode.
+	 */
+	uint16_t	port_id;
+	uint8_t	unused_0[6];
+} __attribute__((packed));
+
+/* hwrm_port_ecn_qstats_output (size:384b/48B) */
+struct hwrm_port_ecn_qstats_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of packets marked in CoS queue 0. */
+	uint32_t	mark_cnt_cos0;
+	/* Number of packets marked in CoS queue 1. */
+	uint32_t	mark_cnt_cos1;
+	/* Number of packets marked in CoS queue 2. */
+	uint32_t	mark_cnt_cos2;
+	/* Number of packets marked in CoS queue 3. */
+	uint32_t	mark_cnt_cos3;
+	/* Number of packets marked in CoS queue 4. */
+	uint32_t	mark_cnt_cos4;
+	/* Number of packets marked in CoS queue 5. */
+	uint32_t	mark_cnt_cos5;
+	/* Number of packets marked in CoS queue 6. */
+	uint32_t	mark_cnt_cos6;
+	/* Number of packets marked in CoS queue 7. */
+	uint32_t	mark_cnt_cos7;
+	/*
+	 * Bitmask that indicates which CoS queues have ECN marking enabled.
+	 * Bit i corresponds to CoS queue i.
+	 */
+	uint8_t	mark_en;
+	uint8_t	unused_0[6];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
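+/*
+ * Editor's note: illustrative sketch, not part of the generated HSI
+ * file. Bit i of mark_en reports whether ECN marking is enabled on
+ * CoS queue i, so a caller can test a queue like this:
+ */
+static inline int
+example_ecn_marking_enabled(const struct hwrm_port_ecn_qstats_output *resp,
+			    unsigned int cos_queue)
+{
+	/* Bit i of mark_en corresponds to CoS queue i. */
+	return (resp->mark_en >> cos_queue) & 1;
+}
+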
 /***********************
  * hwrm_port_clr_stats *
  ***********************/
@@ -18322,7 +19283,7 @@ struct hwrm_port_phy_mdio_bus_acquire_input {
 	 * Timeout in milli seconds, MDIO BUS will be released automatically
 	 * after this time, if another mdio acquire command is not received
 	 * within the timeout window from the same client.
-	 * A 0xFFFF will hold the bus until this bus is released.
+	 * A 0xFFFF will hold the bus until this bus is released.
 	 */
 	uint16_t	mdio_bus_timeout;
 	uint8_t	unused_0[2];
@@ -19158,6 +20119,30 @@ struct hwrm_queue_pfcenable_qcfg_output {
 	/* If set to 1, then PFC is enabled on PRI 7. */
 	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI7_PFC_ENABLED \
 		UINT32_C(0x80)
+	/* If set to 1, then PFC WatchDog is enabled on PRI 0. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI0_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x100)
+	/* If set to 1, then PFC WatchDog is enabled on PRI 1. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI1_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x200)
+	/* If set to 1, then PFC WatchDog is enabled on PRI 2. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI2_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x400)
+	/* If set to 1, then PFC WatchDog is enabled on PRI 3. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI3_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x800)
+	/* If set to 1, then PFC WatchDog is enabled on PRI 4. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI4_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x1000)
+	/* If set to 1, then PFC WatchDog is enabled on PRI 5. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI5_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x2000)
+	/* If set to 1, then PFC WatchDog is enabled on PRI 6. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI6_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x4000)
+	/* If set to 1, then PFC WatchDog is enabled on PRI 7. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI7_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x8000)
 	uint8_t	unused_0[3];
 	/*
 	 * This field is used in Output records to indicate that the output
@@ -19229,6 +20214,30 @@ struct hwrm_queue_pfcenable_cfg_input {
 	/* If set to 1, then PFC is requested to be enabled on PRI 7. */
 	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI7_PFC_ENABLED \
 		UINT32_C(0x80)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI 0. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI0_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x100)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI 1. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI1_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x200)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI 2. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI2_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x400)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI 3. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI3_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x800)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI 4. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI4_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x1000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI 5. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI5_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x2000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI 6. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI6_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x4000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI 7. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI7_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x8000)
 	/*
 	 * Port ID of port for which the table is being configured.
 	 * The HWRM needs to check whether this function is allowed
@@ -31831,15 +32840,2172 @@ struct hwrm_cfa_eem_qcfg_input {
 	 */
 	uint64_t	resp_addr;
 	uint32_t	flags;
-	/* When set to 1, indicates the configuration is the TX flow. */
-	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
-	/* When set to 1, indicates the configuration is the RX flow. */
-	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
-	uint32_t	unused_0;
+	/* When set to 1, indicates the configuration is the TX flow. */
+	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
+	/* When set to 1, indicates the configuration is the RX flow. */
+	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
+	uint32_t	unused_0;
+} __attribute__((packed));
+
+/* hwrm_cfa_eem_qcfg_output (size:256b/32B) */
+struct hwrm_cfa_eem_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/* When set to 1, indicates the configuration is the TX flow. */
+	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_TX \
+		UINT32_C(0x1)
+	/* When set to 1, indicates the configuration is the RX flow. */
+	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_RX \
+		UINT32_C(0x2)
+	/* When set to 1, all offloaded flows will be sent to EEM. */
+	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x4)
+	/* The number of entries the FW has configured for EEM. */
+	uint32_t	num_entries;
+	/* Configured EEM with the given context id for KEY0 table. */
+	uint16_t	key0_ctx_id;
+	/* Configured EEM with the given context id for KEY1 table. */
+	uint16_t	key1_ctx_id;
+	/* Configured EEM with the given context id for RECORD table. */
+	uint16_t	record_ctx_id;
+	/* Configured EEM with the given context id for EFC table. */
+	uint16_t	efc_ctx_id;
+	/* Configured EEM with the given context id for FID table. */
+	uint16_t	fid_ctx_id;
+	uint8_t	unused_2[5];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/*******************
+ * hwrm_cfa_eem_op *
+ *******************/
+
+
+/* hwrm_cfa_eem_op_input (size:192b/24B) */
+struct hwrm_cfa_eem_op_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	flags;
+	/*
+	 * When set to 1, indicates the host memory which is passed will be
+	 * used for the TX flow offload function specified in fid.
+	 * Note if this bit is set then the path_rx bit can't be set.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
+	/*
+	 * When set to 1, indicates the host memory which is passed will be
+	 * used for the RX flow offload function specified in fid.
+	 * Note if this bit is set then the path_tx bit can't be set.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
+	uint16_t	unused_0;
+	/* The EEM operation to perform. */
+	uint16_t	op;
+	/* This value is reserved and should not be used. */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_RESERVED    UINT32_C(0x0)
+	/*
+	 * To properly stop EEM and ensure there are no in-flight DMAs,
+	 * the caller must disable EEM for the given PF using this call.
+	 * This safely disables EEM and ensures that all DMAs to the
+	 * keys/records/efc tables have been completed.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_DISABLE UINT32_C(0x1)
+	/*
+	 * Once the EEM host memory and EEM options have been configured,
+	 * the caller should enable EEM for the given PF. Note that once
+	 * this call has been made, the EEM mechanism will be active and
+	 * DMAs will occur as packets are processed.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_ENABLE  UINT32_C(0x2)
+	/*
+	 * Clear EEM settings for the given PF so that the register values
+	 * are reset back to their initial state.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP UINT32_C(0x3)
+	#define HWRM_CFA_EEM_OP_INPUT_OP_LAST \
+		HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP
+} __attribute__((packed));
+
+/* hwrm_cfa_eem_op_output (size:128b/16B) */
+struct hwrm_cfa_eem_op_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
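+/*
+ * Editor's note: illustrative sketch, not part of the generated HSI
+ * file. It shows the teardown ordering described above: EEM must be
+ * disabled, which quiesces all DMA to the key/record/efc tables,
+ * before the host backing memory may be freed. hwrm_send() is a
+ * hypothetical stand-in for the driver's HWRM transport.
+ */
+extern int hwrm_send(void *bp, void *req, uint32_t req_len);
+
+static inline int
+example_eem_disable_rx(void *bp)
+{
+	struct hwrm_cfa_eem_op_input req = { 0 };
+
+	req.flags = HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX;
+	req.op = HWRM_CFA_EEM_OP_INPUT_OP_EEM_DISABLE;
+	/* Only after this succeeds is it safe to free the EEM tables. */
+	return hwrm_send(bp, &req, sizeof(req));
+}
+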
+/********************************
+ * hwrm_cfa_adv_flow_mgnt_qcaps *
+ ********************************/
+
+
+/* hwrm_cfa_adv_flow_mgnt_qcaps_input (size:256b/32B) */
+struct hwrm_cfa_adv_flow_mgnt_qcaps_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	unused_0[4];
+} __attribute__((packed));
+
+/* hwrm_cfa_adv_flow_mgnt_qcaps_output (size:128b/16B) */
+struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/*
+	 * Value of 1 to indicate firmware supports 16-bit flow handle.
+	 * Value of 0 to indicate firmware does not support 16-bit flow handle.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_16BIT_SUPPORTED \
+		UINT32_C(0x1)
+	/*
+	 * Value of 1 to indicate firmware supports 64-bit flow handle.
+	 * Value of 0 to indicate firmware does not support 64-bit flow handle.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_64BIT_SUPPORTED \
+		UINT32_C(0x2)
+	/*
+	 * Value of 1 to indicate firmware supports flow batch delete operation through
+	 * HWRM_CFA_FLOW_FLUSH command.
+	 * Value of 0 to indicate that the firmware does not support flow batch delete
+	 * operation.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_BATCH_DELETE_SUPPORTED \
+		UINT32_C(0x4)
+	/*
+	 * Value of 1 to indicate that the firmware supports flow reset all operation through
+	 * HWRM_CFA_FLOW_FLUSH command.
+	 * Value of 0 indicates firmware does not support flow reset all operation.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_RESET_ALL_SUPPORTED \
+		UINT32_C(0x8)
+	/*
+	 * Value of 1 to indicate that firmware supports use of FID as dest_id in
+	 * HWRM_CFA_NTUPLE_ALLOC/CFG commands.
+	 * Value of 0 indicates firmware does not support use of FID as dest_id.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_DEST_FUNC_SUPPORTED \
+		UINT32_C(0x10)
+	/*
+	 * Value of 1 to indicate that firmware supports TX EEM flows.
+	 * Value of 0 indicates firmware does not support TX EEM flows.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_TX_EEM_FLOW_SUPPORTED \
+		UINT32_C(0x20)
+	/*
+	 * Value of 1 to indicate that firmware supports RX EEM flows.
+	 * Value of 0 indicates firmware does not support RX EEM flows.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RX_EEM_FLOW_SUPPORTED \
+		UINT32_C(0x40)
+	/*
+	 * Value of 1 to indicate that firmware supports the dynamic allocation of an
+	 * on-chip flow counter which can be used for EEM flows.
+	 * Value of 0 indicates firmware does not support the dynamic allocation of an
+	 * on-chip flow counter.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_COUNTER_ALLOC_SUPPORTED \
+		UINT32_C(0x80)
+	/*
+	 * Value of 1 to indicate that firmware supports setting of
+	 * rfs_ring_tbl_idx in HWRM_CFA_NTUPLE_ALLOC command.
+	 * Value of 0 indicates firmware does not support rfs_ring_tbl_idx.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_SUPPORTED \
+		UINT32_C(0x100)
+	/*
+	 * Value of 1 to indicate that firmware supports untagged matching
+	 * criteria on HWRM_CFA_L2_FILTER_ALLOC command. Value of 0
+	 * indicates firmware does not support untagged matching.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_UNTAGGED_VLAN_SUPPORTED \
+		UINT32_C(0x200)
+	/*
+	 * Value of 1 to indicate that firmware supports XDP filter. Value
+	 * of 0 indicates firmware does not support XDP filter.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_XDP_SUPPORTED \
+		UINT32_C(0x400)
+	/*
+	 * Value of 1 to indicate that the firmware supports L2 header source
+	 * fields matching criteria on HWRM_CFA_L2_FILTER_ALLOC command.
+	 * Value of 0 indicates firmware does not support L2 header source
+	 * fields matching.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED \
+		UINT32_C(0x800)
+	/*
+	 * If set to 1, firmware is capable of supporting ARP ethertype as
+	 * matching criteria for HWRM_CFA_NTUPLE_FILTER_ALLOC command on the
+	 * RX direction. By default, this flag should be 0 for older versions
+	 * of firmware.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ARP_SUPPORTED \
+		UINT32_C(0x1000)
+	/*
+	 * Value of 1 to indicate that firmware supports setting of
+	 * rfs_ring_tbl_idx in dst_id field of the HWRM_CFA_NTUPLE_ALLOC
+	 * command. Value of 0 indicates firmware does not support
+	 * rfs_ring_tbl_idx in dst_id field.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_V2_SUPPORTED \
+		UINT32_C(0x2000)
+	/*
+	 * If set to 1, firmware is capable of supporting IPv4/IPv6 as
+	 * ethertype in HWRM_CFA_NTUPLE_FILTER_ALLOC command on the RX
+	 * direction. By default, this flag should be 0 for older versions
+	 * of firmware.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ETHERTYPE_IP_SUPPORTED \
+		UINT32_C(0x4000)
+	uint8_t	unused_0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
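+/*
+ * Editor's note: illustrative sketch, not part of the generated HSI
+ * file. A driver typically queries these capabilities once at init
+ * and latches the bits it cares about, e.g.:
+ */
+static inline int
+example_rx_eem_supported(const struct hwrm_cfa_adv_flow_mgnt_qcaps_output *resp)
+{
+	return !!(resp->flags &
+		  HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RX_EEM_FLOW_SUPPORTED);
+}
+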
+/******************
+ * hwrm_cfa_tflib *
+ ******************/
+
+
+/* hwrm_cfa_tflib_input (size:1024b/128B) */
+struct hwrm_cfa_tflib_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* TFLIB message type. */
+	uint16_t	tf_type;
+	/* TFLIB message subtype. */
+	uint16_t	tf_subtype;
+	/* unused. */
+	uint8_t	unused0[4];
+	/* TFLIB request data. */
+	uint32_t	tf_req[26];
+} __attribute__((packed));
+
+/* hwrm_cfa_tflib_output (size:5632b/704B) */
+struct hwrm_cfa_tflib_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* TFLIB message type. */
+	uint16_t	tf_type;
+	/* TFLIB message subtype. */
+	uint16_t	tf_subtype;
+	/* TFLIB response code */
+	uint32_t	tf_resp_code;
+	/* TFLIB response data. */
+	uint32_t	tf_resp[170];
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/***********
+ * hwrm_tf *
+ ***********/
+
+
+/* hwrm_tf_input (size:1024b/128B) */
+struct hwrm_tf_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* TF message type. */
+	uint16_t	type;
+	/* TF message subtype. */
+	uint16_t	subtype;
+	/* unused. */
+	uint8_t	unused0[4];
+	/* TF request data. */
+	uint32_t	req[26];
+} __attribute__((packed));
+
+/* hwrm_tf_output (size:5632b/704B) */
+struct hwrm_tf_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* TF message type. */
+	uint16_t	type;
+	/* TF message subtype. */
+	uint16_t	subtype;
+	/* TF response code */
+	uint32_t	resp_code;
+	/* TF response data. */
+	uint32_t	resp[170];
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
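+/*
+ * Editor's note: illustrative sketch, not part of the generated HSI
+ * file. hwrm_tf is a generic envelope: the TF message is identified by
+ * type/subtype and its payload travels inline in the fixed 26-dword
+ * req[] area. The byte-wise packing below is an assumption for
+ * illustration only.
+ */
+static inline void
+example_tf_pack_request(struct hwrm_tf_input *req, uint16_t type,
+			uint16_t subtype, const uint8_t *payload,
+			uint16_t payload_len)
+{
+	uint8_t *dst = (uint8_t *)req->req;
+	uint16_t i;
+
+	req->type = type;
+	req->subtype = subtype;
+	/* The payload must fit the fixed request area. */
+	if (payload_len > sizeof(req->req))
+		return;
+	for (i = 0; i < payload_len; i++)
+		dst[i] = payload[i];
+}
+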
+/***********************
+ * hwrm_tf_version_get *
+ ***********************/
+
+
+/* hwrm_tf_version_get_input (size:128b/16B) */
+struct hwrm_tf_version_get_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_version_get_output (size:128b/16B) */
+struct hwrm_tf_version_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Version Major number. */
+	uint8_t	major;
+	/* Version Minor number. */
+	uint8_t	minor;
+	/* Version Update number. */
+	uint8_t	update;
+	/* unused. */
+	uint8_t	unused0[4];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_session_open *
+ ************************/
+
+
+/* hwrm_tf_session_open_input (size:640b/80B) */
+struct hwrm_tf_session_open_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Name of the session. */
+	uint8_t	session_name[64];
+} __attribute__((packed));
+
+/* hwrm_tf_session_open_output (size:128b/16B) */
+struct hwrm_tf_session_open_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
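+/*
+ * Editor's note: illustrative sketch, not part of the generated HSI
+ * file. The fw_session_id returned by the open is the handle that
+ * every subsequent HWRM_TF_* request carries. hwrm_send_resp() is a
+ * hypothetical transport that also returns the response buffer.
+ */
+extern int hwrm_send_resp(void *bp, void *req, uint32_t req_len,
+			  void *resp, uint32_t resp_len);
+
+static inline int
+example_tf_session_open(void *bp, const char *name, uint32_t *fw_session_id)
+{
+	struct hwrm_tf_session_open_input req = { 0 };
+	struct hwrm_tf_session_open_output resp = { 0 };
+	unsigned int i;
+	int rc;
+
+	for (i = 0; i < sizeof(req.session_name) - 1 && name[i] != '\0'; i++)
+		req.session_name[i] = (uint8_t)name[i];
+	rc = hwrm_send_resp(bp, &req, sizeof(req), &resp, sizeof(resp));
+	if (rc == 0)
+		*fw_session_id = resp.fw_session_id; /* handle for later calls */
+	return rc;
+}
+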
+/**************************
+ * hwrm_tf_session_attach *
+ **************************/
+
+
+/* hwrm_tf_session_attach_input (size:704b/88B) */
+struct hwrm_tf_session_attach_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Unique session identifier for the session that the attach
+	 * request wants to attach to. This value originates from the
+	 * shared session memory that the attach request opened by
+	 * way of the 'attach name' that was passed in to the core
+	 * attach API.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
+	 */
+	uint32_t	attach_fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session itself. */
+	uint8_t	session_name[64];
+} __attribute__((packed));
+
+/* hwrm_tf_session_attach_output (size:128b/16B) */
+struct hwrm_tf_session_attach_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session. This fw_session_id is unique to the attach
+	 * request.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/*************************
+ * hwrm_tf_session_close *
+ *************************/
+
+
+/* hwrm_tf_session_close_input (size:192b/24B) */
+struct hwrm_tf_session_close_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[4];
+} __attribute__((packed));
+
+/* hwrm_tf_session_close_output (size:128b/16B) */
+struct hwrm_tf_session_close_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_session_qcfg *
+ ************************/
+
+
+/* hwrm_tf_session_qcfg_input (size:192b/24B) */
+struct hwrm_tf_session_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[4];
+} __attribute__((packed));
+
+/* hwrm_tf_session_qcfg_output (size:128b/16B) */
+struct hwrm_tf_session_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* RX action control settings flags. */
+	uint8_t	rx_act_flags;
+	/*
+	 * A value of 1 in this field indicates that Global Flow ID
+	 * reporting into cfa_code and cfa_metadata is enabled.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_ABCR_GFID_EN \
+		UINT32_C(0x1)
+	/*
+	 * A value of 1 in this field indicates that both inner and outer
+	 * are stripped and inner tag is passed.
+	 * Enabled.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_ABCR_VTAG_DLT_BOTH \
+		UINT32_C(0x2)
+	/*
+	 * A value of 1 in this field indicates that the re-use of
+	 * existing tunnel L2 header SMAC is enabled for
+	 * Non-tunnel L2, L2-L3 and IP-IP tunnel.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_TECT_SMAC_OVR_RUTNSL2 \
+		UINT32_C(0x4)
+	/* TX action control settings flags. */
+	uint8_t	tx_act_flags;
+	/* Disabled. */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_ABCR_VEB_EN \
+		UINT32_C(0x1)
+	/*
+	 * When set to 1 any GRE tunnels will include the
+	 * optional Key field.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_GRE_SET_K \
+		UINT32_C(0x2)
+	/*
+	 * When set to 1, for GRE tunnels, the IPV6 Traffic Class (TC)
+	 * field of the outer header is inherited from the inner header
+	 * (if present) or the fixed value as taken from the encap
+	 * record.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_IPV6_TC_IH \
+		UINT32_C(0x4)
+	/*
+	 * When set to 1, for GRE tunnels, the IPV4 Type Of Service (TOS)
+	 * field of the outer header is inherited from the inner header
+	 * (if present) or the fixed value as taken from the encap record.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_IPV4_TOS_IH \
+		UINT32_C(0x8)
+	/* unused. */
+	uint8_t	unused0[5];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
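+/*
+ * Editor's note: illustrative sketch, not part of the generated HSI
+ * file, decoding one of the queried RX action-control bits:
+ */
+static inline int
+example_gfid_reporting_enabled(const struct hwrm_tf_session_qcfg_output *resp)
+{
+	return !!(resp->rx_act_flags &
+		  HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_ABCR_GFID_EN);
+}
+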
+/******************************
+ * hwrm_tf_session_resc_qcaps *
+ ******************************/
+
+
+/* hwrm_tf_session_resc_qcaps_input (size:256b/32B) */
+struct hwrm_tf_session_resc_qcaps_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates rx flow. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates tx flow. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided qcaps_addr
+	 * buffer. The size should be set to the Resource Manager
+	 * provided max qcaps value that is device specific. This is
+	 * the max size possible.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the qcaps output data
+	 * array. Array is of tf_rm_cap type and is device specific.
+	 */
+	uint64_t	qcaps_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_qcaps_output (size:192b/24B) */
+struct hwrm_tf_session_resc_qcaps_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Session reservation strategy. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_MASK \
+		UINT32_C(0x3)
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_SFT \
+		0
+	/* Static partitioning. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC \
+		UINT32_C(0x0)
+	/* Strategy 1. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1 \
+		UINT32_C(0x1)
+	/* Strategy 2. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2 \
+		UINT32_C(0x2)
+	/* Strategy 3. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3 \
+		UINT32_C(0x3)
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3
+	/*
+	 * Size of the returned tf_rm_cap data array. The value
+	 * cannot exceed the size defined by the input msg. The data
+	 * array is returned using the qcaps_addr specified DMA
+	 * address also provided by the input msg.
+	 */
+	uint16_t	size;
+	/* unused. */
+	uint16_t	unused0;
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
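+/*
+ * Editor's note: illustrative sketch, not part of the generated HSI
+ * file, showing the qcaps DMA handshake: the caller provides a buffer
+ * sized for the device maximum, the firmware writes the tf_rm_cap
+ * array there, and the response reports how much of the buffer was
+ * used. hwrm_send_resp() and caps_iova are hypothetical.
+ */
+extern int hwrm_send_resp(void *bp, void *req, uint32_t req_len,
+			  void *resp, uint32_t resp_len);
+
+static inline int
+example_tf_resc_qcaps_tx(void *bp, uint32_t fw_session_id,
+			 uint64_t caps_iova, uint16_t caps_size,
+			 uint16_t *bytes_used)
+{
+	struct hwrm_tf_session_resc_qcaps_input req = { 0 };
+	struct hwrm_tf_session_resc_qcaps_output resp = { 0 };
+	int rc;
+
+	req.fw_session_id = fw_session_id;
+	req.flags = HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX;
+	req.size = caps_size;		/* bytes available at caps_iova */
+	req.qcaps_addr = caps_iova;	/* DMA address of the tf_rm_cap array */
+	rc = hwrm_send_resp(bp, &req, sizeof(req), &resp, sizeof(resp));
+	if (rc == 0)
+		*bytes_used = resp.size; /* cannot exceed the input size */
+	return rc;
+}
+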
+/******************************
+ * hwrm_tf_session_resc_alloc *
+ ******************************/
+
+
+/* hwrm_tf_session_resc_alloc_input (size:256b/32B) */
+struct hwrm_tf_session_resc_alloc_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates rx flow. */
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates tx flow. */
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided num_addr
+	 * buffer.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the num input data array
+	 * buffer. Array is of tf_rm_num type. Size of the buffer is
+	 * provided by the 'size' field in this message.
+	 */
+	uint64_t	num_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_alloc_output (size:128b/16B) */
+struct hwrm_tf_session_resc_alloc_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/*****************************
+ * hwrm_tf_session_resc_free *
+ *****************************/
+
+
+/* hwrm_tf_session_resc_free_input (size:256b/32B) */
+struct hwrm_tf_session_resc_free_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates rx flow. */
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates tx flow. */
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided free_addr
+	 * buffer.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the free input data array
+	 * buffer.  Array of tf_rm_res type. Size of the buffer is
+	 * provided by the 'size' field of this message.
+	 */
+	uint64_t	free_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_free_output (size:128b/16B) */
+struct hwrm_tf_session_resc_free_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/******************************
+ * hwrm_tf_session_resc_flush *
+ ******************************/
+
+
+/* hwrm_tf_session_resc_flush_input (size:256b/32B) */
+struct hwrm_tf_session_resc_flush_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates rx flow. */
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates tx flow. */
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided flush_addr
+	 * buffer.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the flush input data array
+	 * buffer.  Array of tf_rm_res type. Size of the buffer is
+	 * provided by the 'size' field in this message.
+	 */
+	uint64_t	flush_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_flush_output (size:128b/16B) */
+struct hwrm_tf_session_resc_flush_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/* TruFlow RM capability of a resource. */
+/* tf_rm_cap (size:64b/8B) */
+struct tf_rm_cap {
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Minimum value. */
+	uint16_t	min;
+	/* Maximum value. */
+	uint16_t	max;
+} __attribute__((packed));
+
+/* TruFlow RM number of a resource. */
+/* tf_rm_num (size:64b/8B) */
+struct tf_rm_num {
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Number of resources. */
+	uint32_t	num;
+} __attribute__((packed));
+
+/* TruFlow RM reservation information. */
+/* tf_rm_res (size:64b/8B) */
+struct tf_rm_res {
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Start offset. */
+	uint16_t	start;
+	/* Number of resources. */
+	uint16_t	stride;
+} __attribute__((packed));
+
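+/*
+ * Editor's note: illustrative sketch, not part of the generated HSI
+ * file, relating the three RM records: qcaps returns tf_rm_cap
+ * (min/max per type), the driver requests counts via tf_rm_num, and
+ * the firmware answers reservations as tf_rm_res (start/stride). Here
+ * the request is simply clamped to the advertised maximum.
+ */
+static inline void
+example_tf_rm_request_max(const struct tf_rm_cap *caps,
+			  struct tf_rm_num *nums, uint32_t entries)
+{
+	uint32_t i;
+
+	for (i = 0; i < entries; i++) {
+		nums[i].type = caps[i].type;
+		nums[i].num = caps[i].max;	/* ask for the full range */
+	}
+}
+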
+/************************
+ * hwrm_tf_tbl_type_get *
+ ************************/
+
+
+/* hwrm_tf_tbl_type_get_input (size:256b/32B) */
+struct hwrm_tf_tbl_type_get_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates rx flow. */
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates tx flow. */
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint8_t	unused0[2];
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Index of the type to retrieve. */
+	uint32_t	index;
+} __attribute__((packed));
+
+/* hwrm_tf_tbl_type_get_output (size:1216b/152B) */
+struct hwrm_tf_tbl_type_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Response code. */
+	uint32_t	resp_code;
+	/* Response size. */
+	uint16_t	size;
+	/* unused */
+	uint16_t	unused0;
+	/* Response data. */
+	uint8_t	data[128];
+	/* unused */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_tbl_type_set *
+ ************************/
+
+
+/* hwrm_tf_tbl_type_set_input (size:1024b/128B) */
+struct hwrm_tf_tbl_type_set_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to the host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint8_t	unused0[2];
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Index of the type to set. */
+	uint32_t	index;
+	/* Size of the data to set. */
+	uint16_t	size;
+	/* unused */
+	uint8_t	unused1[6];
+	/* Data to be set. */
+	uint8_t	data[88];
+} __attribute__((packed));
+
+/* hwrm_tf_tbl_type_set_output (size:128b/16B) */
+struct hwrm_tf_tbl_type_set_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
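+
+/*
+ * Illustrative usage sketch, not part of the generated interface:
+ * setting an entry mirrors the get request plus the payload. entry_data
+ * and entry_len are placeholder names assumed for illustration only.
+ *
+ *	struct hwrm_tf_tbl_type_set_input req = { 0 };
+ *
+ *	req.type = rte_cpu_to_le_32(resc_type);
+ *	req.index = rte_cpu_to_le_32(entry_idx);
+ *	req.size = rte_cpu_to_le_16(entry_len);
+ *	memcpy(req.data, entry_data, RTE_MIN(entry_len, sizeof(req.data)));
+ */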
+
+/*************************
+ * hwrm_tf_ctxt_mem_rgtr *
+ *************************/
+
+
+/* hwrm_tf_ctxt_mem_rgtr_input (size:256b/32B) */
+struct hwrm_tf_ctxt_mem_rgtr_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to the host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Counter PBL indirect levels. */
+	uint8_t	page_level;
+	/* PBL pointer is physical start address. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_0 UINT32_C(0x0)
+	/* PBL pointer points to PTE table. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_1 UINT32_C(0x1)
+	/*
+	 * PBL pointer points to PDE table with each entry pointing
+	 * to PTE tables.
+	 */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_2 UINT32_C(0x2)
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LAST \
+		HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_2
+	/* Page size. */
+	uint8_t	page_size;
+	/* 4KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K   UINT32_C(0x0)
+	/* 8KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K   UINT32_C(0x1)
+	/* 64KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K  UINT32_C(0x4)
+	/* 256KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K UINT32_C(0x6)
+	/* 1MB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M   UINT32_C(0x8)
+	/* 2MB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M   UINT32_C(0x9)
+	/* 4MB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M   UINT32_C(0xa)
+	/* 1GB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G   UINT32_C(0x12)
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_LAST \
+		HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+	/* unused. */
+	uint32_t	unused0;
+	/* Pointer to the PBL or PDL, depending on the number of levels. */
+	uint64_t	page_dir;
+} __attribute__((packed));
+
+/* hwrm_tf_ctxt_mem_rgtr_output (size:128b/16B) */
+struct hwrm_tf_ctxt_mem_rgtr_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * ID/handle of the recently registered context memory. This
+	 * handle is passed to the TF session.
+	 */
+	uint16_t	ctx_id;
+	/* unused. */
+	uint8_t	unused0[5];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
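+
+/*
+ * Illustrative usage sketch, not part of the generated interface:
+ * registering one physically contiguous region needs no indirection, so
+ * the PBL pointer is the region's physical start address; the ctx_id
+ * returned in the output is later passed to hwrm_tf_ext_em_cfg.
+ * region_iova is a placeholder name assumed for illustration only.
+ *
+ *	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
+ *
+ *	req.page_level = HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_0;
+ *	req.page_size = HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M;
+ *	req.page_dir = rte_cpu_to_le_64(region_iova);
+ */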
+
+/***************************
+ * hwrm_tf_ctxt_mem_unrgtr *
+ ***************************/
+
+
+/* hwrm_tf_ctxt_mem_unrgtr_input (size:192b/24B) */
+struct hwrm_tf_ctxt_mem_unrgtr_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to the host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * ID/handle of the recently registered context memory. This
+	 * handle is passed to the TF session.
+	 */
+	uint16_t	ctx_id;
+	/* unused. */
+	uint8_t	unused0[6];
+} __attribute__((packed));
+
+/* hwrm_tf_ctxt_mem_unrgtr_output (size:128b/16B) */
+struct hwrm_tf_ctxt_mem_unrgtr_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_ext_em_qcaps *
+ ************************/
+
+
+/* hwrm_tf_ext_em_qcaps_input (size:192b/24B) */
+struct hwrm_tf_ext_em_qcaps_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to the host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR \
+		UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_RX \
+		UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX \
+		UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX
+	/* When set to 1, all offloaded flows will be sent to EXT EM. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x2)
+	/* unused. */
+	uint32_t	unused0;
+} __attribute__((packed));
+
+/* hwrm_tf_ext_em_qcaps_output (size:320b/40B) */
+struct hwrm_tf_ext_em_qcaps_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/*
+	 * When set to 1, indicates that the FW supports the Centralized
+	 * Memory Model. The concept designates one entity for the
+	 * memory allocation while all others ‘subscribe’ to it.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_FLAGS_CENTRALIZED_MEMORY_MODEL_SUPPORTED \
+		UINT32_C(0x1)
+	/*
+	 * When set to 1, indicates that the FW supports the Detached
+	 * Centralized Memory Model. The memory is allocated and managed
+	 * as a separate entity. All PFs and VFs will be granted direct
+	 * or semi-direct access to the allocated memory, while none
+	 * of them can interfere with the management of the memory.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_FLAGS_DETACHED_CENTRALIZED_MEMORY_MODEL_SUPPORTED \
+		UINT32_C(0x2)
+	/* unused. */
+	uint32_t	unused0;
+	/* Support flags. */
+	uint32_t	supported;
+	/*
+	 * If set to 1, then EXT EM KEY0 table is supported using
+	 * crc32 hash.
+	 * If set to 0, EXT EM KEY0 table is not supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_KEY0_TABLE \
+		UINT32_C(0x1)
+	/*
+	 * If set to 1, then EXT EM KEY1 table is supported using
+	 * lookup3 hash.
+	 * If set to 0, EXT EM KEY1 table is not supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_KEY1_TABLE \
+		UINT32_C(0x2)
+	/*
+	 * If set to 1, then EXT EM External Record table is supported.
+	 * If set to 0, EXT EM External Record table is not
+	 * supported.  (This table includes action record, EFC
+	 * pointers, encap pointers)
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_EXTERNAL_RECORD_TABLE \
+		UINT32_C(0x4)
+	/*
+	 * If set to 1, then EXT EM External Flow Counters table is
+	 * supported.
+	 * If set to 0, EXT EM External Flow Counters table is not
+	 * supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_EXTERNAL_FLOW_COUNTERS_TABLE \
+		UINT32_C(0x8)
+	/*
+	 * If set to 1, then FID table used for implicit flow flush
+	 * is supported.
+	 * If set to 0, then FID table used for implicit flow flush
+	 * is not supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_FID_TABLE \
+		UINT32_C(0x10)
+	/*
+	 * The maximum number of entries supported by EXT EM. When
+	 * configuring the host memory, the supported numbers of
+	 * entries are:
+	 *      32k, 64k, 128k, 256k, 512k, 1M, 2M, 4M, 8M, 32M, 64M,
+	 *      128M entries.
+	 * Any other value will be rounded down by the FW to the
+	 * closest supported number of entries.
+	 */
+	uint32_t	max_entries_supported;
+	/*
+	 * The entry size in bytes of each entry in the EXT EM
+	 * KEY0/KEY1 tables.
+	 */
+	uint16_t	key_entry_size;
+	/*
+	 * The entry size in bytes of each entry in the EXT EM RECORD
+	 * tables.
+	 */
+	uint16_t	record_entry_size;
+	/* The entry size in bytes of each entry in the EXT EM EFC tables. */
+	uint16_t	efc_entry_size;
+	/* The FID size in bytes of each entry in the EXT EM FID tables. */
+	uint16_t	fid_entry_size;
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
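+
+/*
+ * Illustrative usage sketch, not part of the generated interface: a
+ * caller would typically test the supported bitfield before sizing its
+ * tables. resp and key_sz are placeholder names; resp points to the
+ * completed output.
+ *
+ *	uint32_t sup = rte_le_to_cpu_32(resp->supported);
+ *
+ *	if (sup & HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_KEY0_TABLE)
+ *		key_sz = rte_le_to_cpu_16(resp->key_entry_size);
+ */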
+
+/*********************
+ * hwrm_tf_ext_em_op *
+ *********************/
+
+
+/* hwrm_tf_ext_em_op_input (size:192b/24B) */
+struct hwrm_tf_ext_em_op_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to the host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint16_t	unused0;
+	/* The EXT EM operation to perform. */
+	uint16_t	op;
+	/* This value is reserved and should not be used. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_RESERVED       UINT32_C(0x0)
+	/*
+	 * To properly stop EXT EM and ensure there are no DMA's,
+	 * the caller must disable EXT EM for the given PF, using
+	 * this call. This will safely disable EXT EM and ensure
+	 * that all DMAs to the keys/records/efc have been
+	 * completed.
+	 */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE UINT32_C(0x1)
+	/*
+	 * Once the EXT EM host memory and EXT EM options have been
+	 * configured, the caller should enable EXT EM for the given
+	 * PF. Note that once this call has been made, the EXT EM
+	 * mechanism will be active and DMA's will occur as packets
+	 * are processed.
+	 */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE  UINT32_C(0x2)
+	/*
+	 * Clear EXT EM settings for the given PF so that the
+	 * register values are reset back to their initial state.
+	 */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_CLEANUP UINT32_C(0x3)
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_LAST \
+		HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_CLEANUP
+	/* unused. */
+	uint16_t	unused1;
+} __attribute__((packed));
+
+/* hwrm_tf_ext_em_op_output (size:128b/16B) */
+struct hwrm_tf_ext_em_op_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
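+
+/*
+ * Illustrative usage sketch, not part of the generated interface: the
+ * op field selects the action; a typical teardown disables EXT EM
+ * before unregistering its backing memory.
+ *
+ *	struct hwrm_tf_ext_em_op_input req = { 0 };
+ *
+ *	req.op = rte_cpu_to_le_16(HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+ */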
+
+/**********************
+ * hwrm_tf_ext_em_cfg *
+ **********************/
+
+
+/* hwrm_tf_ext_em_cfg_input (size:384b/48B) */
+struct hwrm_tf_ext_em_cfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to the host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR \
+		UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX \
+		UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX \
+		UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX
+	/* When set to 1, all offloaded flows will be sent to EXT EM. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x2)
+	/* When set to 1, indicates a secondary PF; 0 means a primary PF. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_SECONDARY_PF \
+		UINT32_C(0x4)
+	/*
+	 * Group ID used by the firmware to identify memory pools
+	 * belonging to a certain group.
+	 */
+	uint16_t	group_id;
+	/*
+	 * Dynamically reconfigure the EEM pending cache every 1/10th
+	 * of a second. If set to 0, the EEM HW flush of the pending
+	 * cache is disabled.
+	 */
+	uint8_t	flush_interval;
+	/* unused. */
+	uint8_t	unused0;
+	/*
+	 * Configures EXT EM with the given number of entries. The
+	 * EXT EM tables (KEY0, KEY1, RECORD, EFC) all have the
+	 * same number of entries and all tables will be configured
+	 * using this value. The current minimum value is 32k and the
+	 * current maximum value is 128M.
+	 */
+	uint32_t	num_entries;
+	/* unused. */
+	uint32_t	unused1;
+	/* Configures EXT EM with the given context ID for the KEY0 table. */
+	uint16_t	key0_ctx_id;
+	/* Configures EXT EM with the given context ID for the KEY1 table. */
+	uint16_t	key1_ctx_id;
+	/* Configures EXT EM with the given context ID for the RECORD table. */
+	uint16_t	record_ctx_id;
+	/* Configures EXT EM with the given context ID for the EFC table. */
+	uint16_t	efc_ctx_id;
+	/* Configures EXT EM with the given context ID for the FID table. */
+	uint16_t	fid_ctx_id;
+	/* unused. */
+	uint16_t	unused2;
+	/* unused. */
+	uint32_t	unused3;
+} __attribute__((packed));
+
+/* hwrm_tf_ext_em_cfg_output (size:128b/16B) */
+struct hwrm_tf_ext_em_cfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
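+
+/*
+ * Illustrative usage sketch, not part of the generated interface: the
+ * five table context IDs are the handles returned by earlier
+ * hwrm_tf_ctxt_mem_rgtr calls. key0_id and the other id variables are
+ * placeholder names assumed for illustration only.
+ *
+ *	struct hwrm_tf_ext_em_cfg_input req = { 0 };
+ *
+ *	req.num_entries = rte_cpu_to_le_32(32 * 1024);
+ *	req.key0_ctx_id = rte_cpu_to_le_16(key0_id);
+ *	req.key1_ctx_id = rte_cpu_to_le_16(key1_id);
+ *	req.record_ctx_id = rte_cpu_to_le_16(record_id);
+ *	req.efc_ctx_id = rte_cpu_to_le_16(efc_id);
+ *	req.fid_ctx_id = rte_cpu_to_le_16(fid_id);
+ */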
+
+/***********************
+ * hwrm_tf_ext_em_qcfg *
+ ***********************/
+
+
+/* hwrm_tf_ext_em_qcfg_input (size:192b/24B) */
+struct hwrm_tf_ext_em_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to the host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint32_t	unused0;
+} __attribute__((packed));
+
+/* hwrm_tf_ext_em_qcfg_output (size:256b/32B) */
+struct hwrm_tf_ext_em_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR \
+		UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_RX \
+		UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_TX \
+		UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_TX
+	/* When set to 1, all offloaded flows will be sent to EXT EM. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x2)
+	/* The number of entries the FW has configured for EXT EM. */
+	uint32_t	num_entries;
+	/* The context ID the FW has configured for the KEY0 table. */
+	uint16_t	key0_ctx_id;
+	/* The context ID the FW has configured for the KEY1 table. */
+	uint16_t	key1_ctx_id;
+	/* The context ID the FW has configured for the RECORD table. */
+	uint16_t	record_ctx_id;
+	/* The context ID the FW has configured for the EFC table. */
+	uint16_t	efc_ctx_id;
+	/* The context ID the FW has configured for the FID table. */
+	uint16_t	fid_ctx_id;
+	/* unused. */
+	uint8_t	unused0[5];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
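+
+/*
+ * Illustrative usage sketch, not part of the generated interface: the
+ * qcfg response echoes back the active configuration, e.g.:
+ *
+ *	uint32_t entries = rte_le_to_cpu_32(resp->num_entries);
+ *	uint16_t key0 = rte_le_to_cpu_16(resp->key0_ctx_id);
+ */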
+
+/********************
+ * hwrm_tf_tcam_set *
+ ********************/
+
+
+/* hwrm_tf_tcam_set_input (size:1024b/128B) */
+struct hwrm_tf_tcam_set_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to the host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX
+	/*
+	 * Indicates the device data is being sent via DMA; the
+	 * device data packing does not change.
+	 */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA     UINT32_C(0x2)
+	/*
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Index of TCAM entry. */
+	uint16_t	idx;
+	/* Number of bytes in the TCAM key. */
+	uint8_t	key_size;
+	/* Number of bytes in the TCAM result. */
+	uint8_t	result_size;
+	/*
+	 * Offset from which the mask bytes start in the device data
+	 * array; the key offset is always 0.
+	 */
+	uint8_t	mask_offset;
+	/* Offset from which the result bytes start in the device data array. */
+	uint8_t	result_offset;
+	/* unused. */
+	uint8_t	unused0[6];
+	/*
+	 * TCAM key located at offset 0, mask located at mask_offset
+	 * and result at result_offset for the device.
+	 */
+	uint8_t	dev_data[88];
 } __attribute__((packed));
 
-/* hwrm_cfa_eem_qcfg_output (size:256b/32B) */
-struct hwrm_cfa_eem_qcfg_output {
+/* hwrm_tf_tcam_set_output (size:128b/16B) */
+struct hwrm_tf_tcam_set_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -31848,46 +35014,26 @@ struct hwrm_cfa_eem_qcfg_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	uint32_t	flags;
-	/* When set to 1, indicates the configuration is the TX flow. */
-	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_TX \
-		UINT32_C(0x1)
-	/* When set to 1, indicates the configuration is the RX flow. */
-	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_RX \
-		UINT32_C(0x2)
-	/* When set to 1, all offloaded flows will be sent to EEM. */
-	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD \
-		UINT32_C(0x4)
-	/* The number of entries the FW has configured for EEM. */
-	uint32_t	num_entries;
-	/* Configured EEM with the given context if for KEY0 table. */
-	uint16_t	key0_ctx_id;
-	/* Configured EEM with the given context if for KEY1 table. */
-	uint16_t	key1_ctx_id;
-	/* Configured EEM with the given context if for RECORD table. */
-	uint16_t	record_ctx_id;
-	/* Configured EEM with the given context if for EFC table. */
-	uint16_t	efc_ctx_id;
-	/* Configured EEM with the given context if for EFC table. */
-	uint16_t	fid_ctx_id;
-	uint8_t	unused_2[5];
+	/* unused. */
+	uint8_t	unused0[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
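+
+/*
+ * Illustrative usage sketch, not part of the generated interface:
+ * dev_data carries key, mask and result back to back, and the offsets
+ * in the request tell the FW where each section starts. req is the
+ * hwrm_tf_tcam_set_input being built; key, mask and result are
+ * placeholder buffers, klen and rlen placeholder sizes, and the mask
+ * is assumed to be the same length as the key.
+ *
+ *	req.key_size = klen;
+ *	req.result_size = rlen;
+ *	req.mask_offset = klen;
+ *	req.result_offset = 2 * klen;
+ *	memcpy(&req.dev_data[0], key, klen);
+ *	memcpy(&req.dev_data[klen], mask, klen);
+ *	memcpy(&req.dev_data[2 * klen], result, rlen);
+ */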
 
-/*******************
- * hwrm_cfa_eem_op *
- *******************/
+/********************
+ * hwrm_tf_tcam_get *
+ ********************/
 
 
-/* hwrm_cfa_eem_op_input (size:192b/24B) */
-struct hwrm_cfa_eem_op_input {
+/* hwrm_tf_tcam_get_input (size:256b/32B) */
+struct hwrm_tf_tcam_get_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -31916,49 +35062,31 @@ struct hwrm_cfa_eem_op_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
 	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_TX
 	/*
-	 * When set to 1, indicates the host memory which is passed will be
-	 * used for the TX flow offload function specified in fid.
-	 * Note if this bit is set then the path_rx bit can't be set.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
-	/*
-	 * When set to 1, indicates the host memory which is passed will be
-	 * used for the RX flow offload function specified in fid.
-	 * Note if this bit is set then the path_tx bit can't be set.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
-	uint16_t	unused_0;
-	/* The number of EEM key table entries to be configured. */
-	uint16_t	op;
-	/* This value is reserved and should not be used. */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_RESERVED    UINT32_C(0x0)
-	/*
-	 * To properly stop EEM and ensure there are no DMA's, the caller
-	 * must disable EEM for the given PF, using this call. This will
-	 * safely disable EEM and ensure that all DMA'ed to the
-	 * keys/records/efc have been completed.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_DISABLE UINT32_C(0x1)
-	/*
-	 * Once the EEM host memory has been configured, EEM options have
-	 * been configured. Then the caller should enable EEM for the given
-	 * PF. Note once this call has been made, then the EEM mechanism
-	 * will be active and DMA's will occur as packets are processed.
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
 	 */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_ENABLE  UINT32_C(0x2)
-	/*
-	 * Clear EEM settings for the given PF so that the register values
-	 * are reset back to there initial state.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP UINT32_C(0x3)
-	#define HWRM_CFA_EEM_OP_INPUT_OP_LAST \
-		HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP
+	uint32_t	type;
+	/* Index of a TCAM entry. */
+	uint16_t	idx;
+	/* unused. */
+	uint16_t	unused0;
 } __attribute__((packed));
 
-/* hwrm_cfa_eem_op_output (size:128b/16B) */
-struct hwrm_cfa_eem_op_output {
+/* hwrm_tf_tcam_get_output (size:2368b/296B) */
+struct hwrm_tf_tcam_get_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -31967,24 +35095,41 @@ struct hwrm_cfa_eem_op_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	uint8_t	unused_0[7];
+	/* Number of bytes in the TCAM key. */
+	uint8_t	key_size;
+	/* Number of bytes in the TCAM result. */
+	uint8_t	result_size;
+	/* Offset from which the mask bytes start in the device data array. */
+	uint8_t	mask_offset;
+	/* Offset from which the result bytes start in the device data array. */
+	uint8_t	result_offset;
+	/* unused. */
+	uint8_t	unused0[4];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * TCAM key located at offset 0, mask located at mask_offset
+	 * and result at result_offset for the device.
+	 */
+	uint8_t	dev_data[272];
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
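+
+/*
+ * Illustrative usage sketch, not part of the generated interface: the
+ * response offsets are used to slice dev_data back apart. resp is a
+ * placeholder pointer to the completed output.
+ *
+ *	const uint8_t *key = &resp->dev_data[0];
+ *	const uint8_t *mask = &resp->dev_data[resp->mask_offset];
+ *	const uint8_t *result = &resp->dev_data[resp->result_offset];
+ */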
 
-/********************************
- * hwrm_cfa_adv_flow_mgnt_qcaps *
- ********************************/
+/*********************
+ * hwrm_tf_tcam_move *
+ *********************/
 
 
-/* hwrm_cfa_adv_flow_mgnt_qcaps_input (size:256b/32B) */
-struct hwrm_cfa_adv_flow_mgnt_qcaps_input {
+/* hwrm_tf_tcam_move_input (size:1024b/128B) */
+struct hwrm_tf_tcam_move_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -32013,11 +35158,33 @@ struct hwrm_cfa_adv_flow_mgnt_qcaps_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-	uint32_t	unused_0[4];
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_TX
+	/*
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Number of TCAM index pairs to be swapped for the device. */
+	uint16_t	count;
+	/* unused. */
+	uint16_t	unused0;
+	/* TCAM index pairs to be swapped for the device. */
+	uint16_t	idx_pairs[48];
 } __attribute__((packed));
 
-/* hwrm_cfa_adv_flow_mgnt_qcaps_output (size:128b/16B) */
-struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
+/* hwrm_tf_tcam_move_output (size:128b/16B) */
+struct hwrm_tf_tcam_move_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -32026,131 +35193,26 @@ struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	uint32_t	flags;
-	/*
-	 * Value of 1 to indicate firmware support 16-bit flow handle.
-	 * Value of 0 to indicate firmware not support 16-bit flow handle.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_16BIT_SUPPORTED \
-		UINT32_C(0x1)
-	/*
-	 * Value of 1 to indicate firmware support 64-bit flow handle.
-	 * Value of 0 to indicate firmware not support 64-bit flow handle.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_64BIT_SUPPORTED \
-		UINT32_C(0x2)
-	/*
-	 * Value of 1 to indicate firmware support flow batch delete operation through
-	 * HWRM_CFA_FLOW_FLUSH command.
-	 * Value of 0 to indicate that the firmware does not support flow batch delete
-	 * operation.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_BATCH_DELETE_SUPPORTED \
-		UINT32_C(0x4)
-	/*
-	 * Value of 1 to indicate that the firmware support flow reset all operation through
-	 * HWRM_CFA_FLOW_FLUSH command.
-	 * Value of 0 indicates firmware does not support flow reset all operation.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_RESET_ALL_SUPPORTED \
-		UINT32_C(0x8)
-	/*
-	 * Value of 1 to indicate that firmware supports use of FID as dest_id in
-	 * HWRM_CFA_NTUPLE_ALLOC/CFG commands.
-	 * Value of 0 indicates firmware does not support use of FID as dest_id.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_DEST_FUNC_SUPPORTED \
-		UINT32_C(0x10)
-	/*
-	 * Value of 1 to indicate that firmware supports TX EEM flows.
-	 * Value of 0 indicates firmware does not support TX EEM flows.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_TX_EEM_FLOW_SUPPORTED \
-		UINT32_C(0x20)
-	/*
-	 * Value of 1 to indicate that firmware supports RX EEM flows.
-	 * Value of 0 indicates firmware does not support RX EEM flows.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RX_EEM_FLOW_SUPPORTED \
-		UINT32_C(0x40)
-	/*
-	 * Value of 1 to indicate that firmware supports the dynamic allocation of an
-	 * on-chip flow counter which can be used for EEM flows.
-	 * Value of 0 indicates firmware does not support the dynamic allocation of an
-	 * on-chip flow counter.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_COUNTER_ALLOC_SUPPORTED \
-		UINT32_C(0x80)
-	/*
-	 * Value of 1 to indicate that firmware supports setting of
-	 * rfs_ring_tbl_idx in HWRM_CFA_NTUPLE_ALLOC command.
-	 * Value of 0 indicates firmware does not support rfs_ring_tbl_idx.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_SUPPORTED \
-		UINT32_C(0x100)
-	/*
-	 * Value of 1 to indicate that firmware supports untagged matching
-	 * criteria on HWRM_CFA_L2_FILTER_ALLOC command. Value of 0
-	 * indicates firmware does not support untagged matching.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_UNTAGGED_VLAN_SUPPORTED \
-		UINT32_C(0x200)
-	/*
-	 * Value of 1 to indicate that firmware supports XDP filter. Value
-	 * of 0 indicates firmware does not support XDP filter.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_XDP_SUPPORTED \
-		UINT32_C(0x400)
-	/*
-	 * Value of 1 to indicate that the firmware support L2 header source
-	 * fields matching criteria on HWRM_CFA_L2_FILTER_ALLOC command.
-	 * Value of 0 indicates firmware does not support L2 header source
-	 * fields matching.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED \
-		UINT32_C(0x800)
-	/*
-	 * If set to 1, firmware is capable of supporting ARP ethertype as
-	 * matching criteria for HWRM_CFA_NTUPLE_FILTER_ALLOC command on the
-	 * RX direction. By default, this flag should be 0 for older version
-	 * of firmware.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ARP_SUPPORTED \
-		UINT32_C(0x1000)
-	/*
-	 * Value of 1 to indicate that firmware supports setting of
-	 * rfs_ring_tbl_idx in dst_id field of the HWRM_CFA_NTUPLE_ALLOC
-	 * command. Value of 0 indicates firmware does not support
-	 * rfs_ring_tbl_idx in dst_id field.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_V2_SUPPORTED \
-		UINT32_C(0x2000)
-	/*
-	 * If set to 1, firmware is capable of supporting IPv4/IPv6 as
-	 * ethertype in HWRM_CFA_NTUPLE_FILTER_ALLOC command on the RX
-	 * direction. By default, this flag should be 0 for older version
-	 * of firmware.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ETHERTYPE_IP_SUPPORTED \
-		UINT32_C(0x4000)
-	uint8_t	unused_0[3];
+	/* unused. */
+	uint8_t	unused0[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
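+
+/*
+ * Illustrative usage sketch, not part of the generated interface:
+ * assuming each pair is stored as two adjacent entries, count pairs
+ * consume 2 * count slots of idx_pairs. req, src_idx and dst_idx are
+ * placeholder names assumed for illustration only.
+ *
+ *	req.count = rte_cpu_to_le_16(1);
+ *	req.idx_pairs[0] = rte_cpu_to_le_16(src_idx);
+ *	req.idx_pairs[1] = rte_cpu_to_le_16(dst_idx);
+ */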
 
-/******************
- * hwrm_cfa_tflib *
- ******************/
+/*********************
+ * hwrm_tf_tcam_free *
+ *********************/
 
 
-/* hwrm_cfa_tflib_input (size:1024b/128B) */
-struct hwrm_cfa_tflib_input {
+/* hwrm_tf_tcam_free_input (size:1024b/128B) */
+struct hwrm_tf_tcam_free_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -32179,18 +35241,33 @@ struct hwrm_cfa_tflib_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-	/* TFLIB message type. */
-	uint16_t	tf_type;
-	/* TFLIB message subtype. */
-	uint16_t	tf_subtype;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX
+	/*
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Number of TCAM indices to be deleted for the device. */
+	uint16_t	count;
 	/* unused. */
-	uint8_t	unused0[4];
-	/* TFLIB request data. */
-	uint32_t	tf_req[26];
+	uint16_t	unused0;
+	/* TCAM index list to be deleted for the device. */
+	uint16_t	idx_list[48];
 } __attribute__((packed));
 
-/* hwrm_cfa_tflib_output (size:5632b/704B) */
-struct hwrm_cfa_tflib_output {
+/* hwrm_tf_tcam_free_output (size:128b/16B) */
+struct hwrm_tf_tcam_free_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -32199,22 +35276,15 @@ struct hwrm_cfa_tflib_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	/* TFLIB message type. */
-	uint16_t	tf_type;
-	/* TFLIB message subtype. */
-	uint16_t	tf_subtype;
-	/* TFLIB response code */
-	uint32_t	tf_resp_code;
-	/* TFLIB response data. */
-	uint32_t	tf_resp[170];
 	/* unused. */
-	uint8_t	unused1[7];
+	uint8_t	unused0[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
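+
+/*
+ * Illustrative usage sketch, not part of the generated interface:
+ * freeing two TCAM entries in one request. req, idx_a and idx_b are
+ * placeholder names assumed for illustration only.
+ *
+ *	req.count = rte_cpu_to_le_16(2);
+ *	req.idx_list[0] = rte_cpu_to_le_16(idx_a);
+ *	req.idx_list[1] = rte_cpu_to_le_16(idx_b);
+ */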
@@ -33155,9 +36225,9 @@ struct pcie_ctx_hw_stats {
 	uint64_t	pcie_tl_signal_integrity;
 	/* Number of times LTSSM entered Recovery state */
 	uint64_t	pcie_link_integrity;
-	/* Number of TLP bytes that have been transmitted */
+	/* Reports the TLP transmit traffic rate in Mbps */
 	uint64_t	pcie_tx_traffic_rate;
-	/* Number of TLP bytes that have been received */
+	/* Reports the TLP receive traffic rate in Mbps */
 	uint64_t	pcie_rx_traffic_rate;
 	/* Number of DLLP bytes that have been transmitted */
 	uint64_t	pcie_tx_dllp_statistics;
@@ -33981,7 +37051,23 @@ struct hwrm_nvm_modify_input {
 	uint64_t	host_src_addr;
 	/* 16-bit directory entry index. */
 	uint16_t	dir_idx;
-	uint8_t	unused_0[2];
+	uint16_t	flags;
+	/*
+	 * This flag indicates the sender wants to modify a contiguous NVRAM
+	 * area using a batch of these HWRM requests. The offset of a request
+	 * must be contiguous with the end of the previous request. Firmware
+	 * does not update the directory entry until it receives the last
+	 * request, which is indicated by the batch_last flag.
+	 * This flag is usually set when the sender does not have a block of
+	 * memory big enough to hold the entire NVRAM data to send in one
+	 * request.
+	 */
+	#define HWRM_NVM_MODIFY_INPUT_FLAGS_BATCH_MODE     UINT32_C(0x1)
+	/*
+	 * This flag can be used only when the batch_mode flag is set.
+	 * It indicates that this request is the last of the batch.
+	 */
+	#define HWRM_NVM_MODIFY_INPUT_FLAGS_BATCH_LAST     UINT32_C(0x2)
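+	/*
+	 * Illustrative usage sketch, not part of the interface: when
+	 * streaming a large image in chunks, a sender could build the
+	 * per-request flags as below; last_chunk is a placeholder.
+	 *
+	 *	uint16_t f = HWRM_NVM_MODIFY_INPUT_FLAGS_BATCH_MODE;
+	 *
+	 *	if (last_chunk)
+	 *		f |= HWRM_NVM_MODIFY_INPUT_FLAGS_BATCH_LAST;
+	 *	req.flags = rte_cpu_to_le_16(f);
+	 */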
 	/* 32-bit NVRAM byte-offset to modify content from. */
 	uint32_t	offset;
 	/*
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 02/33] net/bnxt: update hwrm prep to use ptr
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
  2020-03-17 15:37 ` [dpdk-dev] [PATCH 01/33] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 03/33] net/bnxt: add truflow message handlers Venkat Duvvuru
                   ` (31 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Randy Schacher

From: Randy Schacher <stuart.schacher@broadcom.com>

- Change HWRM_PREP to use pointer and use the full
  HWRM enum

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |   3 +-
 drivers/net/bnxt/bnxt_hwrm.c | 206 ++++++++++++++++++++++---------------------
 2 files changed, 107 insertions(+), 102 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 3ae08a2..07fb4df 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -594,7 +594,7 @@ struct bnxt {
 
 	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
 
-	uint16_t			hwrm_cmd_seq;
+	uint16_t			chimp_cmd_seq;
 	uint16_t			kong_cmd_seq;
 	void				*hwrm_cmd_resp_addr;
 	rte_iova_t			hwrm_cmd_resp_dma_addr;
@@ -610,6 +610,7 @@ struct bnxt {
 #define DFLT_HWRM_CMD_TIMEOUT		500000
 	 /* short command timeout value of 50ms */
 #define SHORT_HWRM_CMD_TIMEOUT		50000
+#define HWRM_CMD_TIMEOUT_TRUFLOW        DFLT_HWRM_CMD_TIMEOUT
 	/* default HWRM request timeout value */
 	uint32_t			hwrm_cmd_timeout;
 
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index a9c9c72..2fb78b6 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -100,7 +100,11 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
 	if (bp->flags & BNXT_FLAG_FATAL_ERROR)
 		return 0;
 
-	timeout = bp->hwrm_cmd_timeout;
+	/* For VER_GET command, use the TRUFLOW command timeout */
+	if (rte_cpu_to_le_16(req->req_type) == HWRM_VER_GET)
+		timeout = HWRM_CMD_TIMEOUT_TRUFLOW;
+	else
+		timeout = bp->hwrm_cmd_timeout;
 
 	if (bp->flags & BNXT_FLAG_SHORT_CMD ||
 	    msg_len > bp->max_req_len) {
@@ -182,19 +186,19 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
  *
  * HWRM_UNLOCK() must be called after all response processing is completed.
  */
-#define HWRM_PREP(req, type, kong) do { \
+#define HWRM_PREP(req, type, kong) do {	\
 	rte_spinlock_lock(&bp->hwrm_lock); \
 	if (bp->hwrm_cmd_resp_addr == NULL) { \
 		rte_spinlock_unlock(&bp->hwrm_lock); \
 		return -EACCES; \
 	} \
 	memset(bp->hwrm_cmd_resp_addr, 0, bp->max_resp_len); \
-	req.req_type = rte_cpu_to_le_16(HWRM_##type); \
-	req.cmpl_ring = rte_cpu_to_le_16(-1); \
-	req.seq_id = kong ? rte_cpu_to_le_16(bp->kong_cmd_seq++) :\
-		rte_cpu_to_le_16(bp->hwrm_cmd_seq++); \
-	req.target_id = rte_cpu_to_le_16(0xffff); \
-	req.resp_addr = rte_cpu_to_le_64(bp->hwrm_cmd_resp_dma_addr); \
+	(req)->req_type = rte_cpu_to_le_16(type); \
+	(req)->cmpl_ring = rte_cpu_to_le_16(-1); \
+	(req)->seq_id = kong ? rte_cpu_to_le_16(bp->kong_cmd_seq++) :\
+		rte_cpu_to_le_16(bp->chimp_cmd_seq++); \
+	(req)->target_id = rte_cpu_to_le_16(0xffff); \
+	(req)->resp_addr = rte_cpu_to_le_64(bp->hwrm_cmd_resp_dma_addr); \
 } while (0)
 
 #define HWRM_CHECK_RESULT_SILENT() do {\
@@ -263,7 +267,7 @@ int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	struct hwrm_cfa_l2_set_rx_mask_input req = {.req_type = 0 };
 	struct hwrm_cfa_l2_set_rx_mask_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 	req.mask = 0;
 
@@ -288,7 +292,7 @@ int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp,
 	if (vnic->fw_vnic_id == INVALID_HW_RING_ID)
 		return rc;
 
-	HWRM_PREP(req, CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
 	if (vnic->flags & BNXT_VNIC_INFO_BCAST)
@@ -347,7 +351,7 @@ int bnxt_hwrm_cfa_vlan_antispoof_cfg(struct bnxt *bp, uint16_t fid,
 				return 0;
 		}
 	}
-	HWRM_PREP(req, CFA_VLAN_ANTISPOOF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_VLAN_ANTISPOOF_CFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(fid);
 
 	req.vlan_tag_mask_tbl_addr =
@@ -389,7 +393,7 @@ int bnxt_hwrm_clear_l2_filter(struct bnxt *bp,
 	if (l2_filter->l2_ref_cnt > 0)
 		return 0;
 
-	HWRM_PREP(req, CFA_L2_FILTER_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_FILTER_FREE, BNXT_USE_CHIMP_MB);
 
 	req.l2_filter_id = rte_cpu_to_le_64(filter->fw_l2_filter_id);
 
@@ -440,7 +444,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 	if (filter->fw_l2_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_l2_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_L2_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -503,7 +507,7 @@ int bnxt_hwrm_ptp_cfg(struct bnxt *bp)
 	if (!ptp)
 		return 0;
 
-	HWRM_PREP(req, PORT_MAC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_MAC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (ptp->rx_filter)
 		flags |= HWRM_PORT_MAC_CFG_INPUT_FLAGS_PTP_RX_TS_CAPTURE_ENABLE;
@@ -536,7 +540,7 @@ static int bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
 	if (ptp)
 		return 0;
 
-	HWRM_PREP(req, PORT_MAC_PTP_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_MAC_PTP_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(bp->pf.port_id);
 
@@ -591,7 +595,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	uint32_t flags;
 	int i;
 
-	HWRM_PREP(req, FUNC_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCAPS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 
@@ -721,7 +725,7 @@ int bnxt_hwrm_vnic_qcaps(struct bnxt *bp)
 	struct hwrm_vnic_qcaps_input req = {.req_type = 0 };
 	struct hwrm_vnic_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_QCAPS, BNXT_USE_CHIMP_MB);
 
 	req.target_id = rte_cpu_to_le_16(0xffff);
 
@@ -748,7 +752,7 @@ int bnxt_hwrm_func_reset(struct bnxt *bp)
 	struct hwrm_func_reset_input req = {.req_type = 0 };
 	struct hwrm_func_reset_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_RESET, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_RESET, BNXT_USE_CHIMP_MB);
 
 	req.enables = rte_cpu_to_le_32(0);
 
@@ -781,7 +785,7 @@ int bnxt_hwrm_func_driver_register(struct bnxt *bp)
 	if ((BNXT_PF(bp) || BNXT_VF_IS_TRUSTED(bp)) && !BNXT_STINGRAY(bp))
 		flags |= HWRM_FUNC_DRV_RGTR_INPUT_FLAGS_MASTER_SUPPORT;
 
-	HWRM_PREP(req, FUNC_DRV_RGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_DRV_RGTR, BNXT_USE_CHIMP_MB);
 	req.enables = rte_cpu_to_le_32(HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_VER |
 			HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_ASYNC_EVENT_FWD);
 	req.ver_maj = RTE_VER_YEAR;
@@ -853,7 +857,7 @@ int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp, bool test)
 	struct hwrm_func_vf_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_func_vf_cfg_input req = {0};
 
-	HWRM_PREP(req, FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
 
 	enables = HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_RX_RINGS  |
 		  HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_TX_RINGS   |
@@ -919,7 +923,7 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp)
 	struct hwrm_func_resource_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_func_resource_qcaps_input req = {0};
 
-	HWRM_PREP(req, FUNC_RESOURCE_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_RESOURCE_QCAPS, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -964,7 +968,7 @@ int bnxt_hwrm_ver_get(struct bnxt *bp, uint32_t timeout)
 
 	bp->max_req_len = HWRM_MAX_REQ_LEN;
 	bp->hwrm_cmd_timeout = timeout;
-	HWRM_PREP(req, VER_GET, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VER_GET, BNXT_USE_CHIMP_MB);
 
 	req.hwrm_intf_maj = HWRM_VERSION_MAJOR;
 	req.hwrm_intf_min = HWRM_VERSION_MINOR;
@@ -1104,7 +1108,7 @@ int bnxt_hwrm_func_driver_unregister(struct bnxt *bp, uint32_t flags)
 	if (!(bp->flags & BNXT_FLAG_REGISTERED))
 		return 0;
 
-	HWRM_PREP(req, FUNC_DRV_UNRGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_DRV_UNRGTR, BNXT_USE_CHIMP_MB);
 	req.flags = flags;
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -1122,7 +1126,7 @@ static int bnxt_hwrm_port_phy_cfg(struct bnxt *bp, struct bnxt_link_info *conf)
 	struct hwrm_port_phy_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	uint32_t enables = 0;
 
-	HWRM_PREP(req, PORT_PHY_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_PHY_CFG, BNXT_USE_CHIMP_MB);
 
 	if (conf->link_up) {
 		/* Setting Fixed Speed. But AutoNeg is ON, So disable it */
@@ -1186,7 +1190,7 @@ static int bnxt_hwrm_port_phy_qcfg(struct bnxt *bp,
 	struct hwrm_port_phy_qcfg_input req = {0};
 	struct hwrm_port_phy_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, PORT_PHY_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_PHY_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -1265,7 +1269,7 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 	int i;
 
 get_rx_info:
-	HWRM_PREP(req, QUEUE_QPORTCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_QUEUE_QPORTCFG, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(dir);
 	/* HWRM Version >= 1.9.1 only if COS Classification is not required. */
@@ -1353,7 +1357,7 @@ int bnxt_hwrm_ring_alloc(struct bnxt *bp,
 	struct rte_mempool *mb_pool;
 	uint16_t rx_buf_size;
 
-	HWRM_PREP(req, RING_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.page_tbl_addr = rte_cpu_to_le_64(ring->bd_dma);
 	req.fbo = rte_cpu_to_le_32(0);
@@ -1477,7 +1481,7 @@ int bnxt_hwrm_ring_free(struct bnxt *bp,
 	struct hwrm_ring_free_input req = {.req_type = 0 };
 	struct hwrm_ring_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_FREE, BNXT_USE_CHIMP_MB);
 
 	req.ring_type = ring_type;
 	req.ring_id = rte_cpu_to_le_16(ring->fw_ring_id);
@@ -1525,7 +1529,7 @@ int bnxt_hwrm_ring_grp_alloc(struct bnxt *bp, unsigned int idx)
 	struct hwrm_ring_grp_alloc_input req = {.req_type = 0 };
 	struct hwrm_ring_grp_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_GRP_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_GRP_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.cr = rte_cpu_to_le_16(bp->grp_info[idx].cp_fw_ring_id);
 	req.rr = rte_cpu_to_le_16(bp->grp_info[idx].rx_fw_ring_id);
@@ -1549,7 +1553,7 @@ int bnxt_hwrm_ring_grp_free(struct bnxt *bp, unsigned int idx)
 	struct hwrm_ring_grp_free_input req = {.req_type = 0 };
 	struct hwrm_ring_grp_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_GRP_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_GRP_FREE, BNXT_USE_CHIMP_MB);
 
 	req.ring_group_id = rte_cpu_to_le_16(bp->grp_info[idx].fw_grp_id);
 
@@ -1571,7 +1575,7 @@ int bnxt_hwrm_stat_clear(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
 	if (cpr->hw_stats_ctx_id == (uint32_t)HWRM_NA_SIGNATURE)
 		return rc;
 
-	HWRM_PREP(req, STAT_CTX_CLR_STATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_CLR_STATS, BNXT_USE_CHIMP_MB);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cpr->hw_stats_ctx_id);
 
@@ -1590,7 +1594,7 @@ int bnxt_hwrm_stat_ctx_alloc(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	struct hwrm_stat_ctx_alloc_input req = {.req_type = 0 };
 	struct hwrm_stat_ctx_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.update_period_ms = rte_cpu_to_le_32(0);
 
@@ -1614,7 +1618,7 @@ int bnxt_hwrm_stat_ctx_free(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	struct hwrm_stat_ctx_free_input req = {.req_type = 0 };
 	struct hwrm_stat_ctx_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_FREE, BNXT_USE_CHIMP_MB);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cpr->hw_stats_ctx_id);
 
@@ -1648,7 +1652,7 @@ int bnxt_hwrm_vnic_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 
 skip_ring_grps:
 	vnic->mru = BNXT_VNIC_MRU(bp->eth_dev->data->mtu);
-	HWRM_PREP(req, VNIC_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_ALLOC, BNXT_USE_CHIMP_MB);
 
 	if (vnic->func_default)
 		req.flags =
@@ -1671,7 +1675,7 @@ static int bnxt_hwrm_vnic_plcmodes_qcfg(struct bnxt *bp,
 	struct hwrm_vnic_plcmodes_qcfg_input req = {.req_type = 0 };
 	struct hwrm_vnic_plcmodes_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_PLCMODES_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
@@ -1704,7 +1708,7 @@ static int bnxt_hwrm_vnic_plcmodes_cfg(struct bnxt *bp,
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 	req.flags = rte_cpu_to_le_32(pmode->flags);
@@ -1743,7 +1747,7 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	if (rc)
 		return rc;
 
-	HWRM_PREP(req, VNIC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (BNXT_CHIP_THOR(bp)) {
 		int dflt_rxq = vnic->start_grp_id;
@@ -1847,7 +1851,7 @@ int bnxt_hwrm_vnic_qcfg(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 		PMD_DRV_LOG(DEBUG, "VNIC QCFG ID %d\n", vnic->fw_vnic_id);
 		return rc;
 	}
-	HWRM_PREP(req, VNIC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.enables =
 		rte_cpu_to_le_32(HWRM_VNIC_QCFG_INPUT_ENABLES_VF_ID_VALID);
@@ -1890,7 +1894,7 @@ int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp,
 	struct hwrm_vnic_rss_cos_lb_ctx_alloc_output *resp =
 						bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_RSS_COS_LB_CTX_ALLOC, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -1919,7 +1923,7 @@ int _bnxt_hwrm_vnic_ctx_free(struct bnxt *bp,
 		PMD_DRV_LOG(DEBUG, "VNIC RSS Rule %x\n", vnic->rss_rule);
 		return rc;
 	}
-	HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_RSS_COS_LB_CTX_FREE, BNXT_USE_CHIMP_MB);
 
 	req.rss_cos_lb_ctx_id = rte_cpu_to_le_16(ctx_idx);
 
@@ -1964,7 +1968,7 @@ int bnxt_hwrm_vnic_free(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_FREE, BNXT_USE_CHIMP_MB);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
@@ -1991,7 +1995,7 @@ bnxt_hwrm_vnic_rss_cfg_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	struct hwrm_vnic_rss_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 
 	for (i = 0; i < nr_ctxs; i++) {
-		HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+		HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
 
 		req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 		req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
@@ -2029,7 +2033,7 @@ int bnxt_hwrm_vnic_rss_cfg(struct bnxt *bp,
 	if (BNXT_CHIP_THOR(bp))
 		return bnxt_hwrm_vnic_rss_cfg_thor(bp, vnic);
 
-	HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
 
 	req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
 	req.hash_mode_flags = vnic->hash_mode;
@@ -2062,7 +2066,7 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp,
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(
 			HWRM_VNIC_PLCMODES_CFG_INPUT_FLAGS_JUMBO_PLACEMENT);
@@ -2103,7 +2107,7 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp,
 		return 0;
 	}
 
-	HWRM_PREP(req, VNIC_TPA_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_TPA_CFG, BNXT_USE_CHIMP_MB);
 
 	if (enable) {
 		req.enables = rte_cpu_to_le_32(
@@ -2143,7 +2147,7 @@ int bnxt_hwrm_func_vf_mac(struct bnxt *bp, uint16_t vf, const uint8_t *mac_addr)
 	memcpy(req.dflt_mac_addr, mac_addr, sizeof(req.dflt_mac_addr));
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -2161,7 +2165,7 @@ int bnxt_hwrm_func_qstats_tx_drop(struct bnxt *bp, uint16_t fid,
 	struct hwrm_func_qstats_input req = {.req_type = 0};
 	struct hwrm_func_qstats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_QSTATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QSTATS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
@@ -2184,7 +2188,7 @@ int bnxt_hwrm_func_qstats(struct bnxt *bp, uint16_t fid,
 	struct hwrm_func_qstats_input req = {.req_type = 0};
 	struct hwrm_func_qstats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_QSTATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QSTATS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
@@ -2221,7 +2225,7 @@ int bnxt_hwrm_func_clr_stats(struct bnxt *bp, uint16_t fid)
 	struct hwrm_func_clr_stats_input req = {.req_type = 0};
 	struct hwrm_func_clr_stats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_CLR_STATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CLR_STATS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
@@ -2928,7 +2932,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	uint16_t flags;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3037,7 +3041,7 @@ static int bnxt_hwrm_pf_func_cfg(struct bnxt *bp, int tx_rings)
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.enables = rte_cpu_to_le_32(enables);
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3109,7 +3113,7 @@ static int reserve_resources_from_vf(struct bnxt *bp,
 	int rc;
 
 	/* Get the actual allocated values now */
-	HWRM_PREP(req, FUNC_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCAPS, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3147,7 +3151,7 @@ int bnxt_hwrm_func_qcfg_current_vf_vlan(struct bnxt *bp, int vf)
 	int rc;
 
 	/* Check for zero MAC address */
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -3165,7 +3169,7 @@ static int update_pf_resource_max(struct bnxt *bp)
 	int rc;
 
 	/* And copy the allocated numbers into the pf struct */
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -3268,7 +3272,7 @@ int bnxt_hwrm_allocate_vfs(struct bnxt *bp, int num_vfs)
 	for (i = 0; i < num_vfs; i++) {
 		add_random_mac_if_needed(bp, &req, i);
 
-		HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+		HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 		req.flags = rte_cpu_to_le_32(bp->pf.vf_info[i].func_cfg_flags);
 		req.fid = rte_cpu_to_le_16(bp->pf.vf_info[i].fid);
 		rc = bnxt_hwrm_send_message(bp,
@@ -3324,7 +3328,7 @@ int bnxt_hwrm_pf_evb_mode(struct bnxt *bp)
 	struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.enables = rte_cpu_to_le_32(HWRM_FUNC_CFG_INPUT_ENABLES_EVB_MODE);
@@ -3344,7 +3348,7 @@ int bnxt_hwrm_tunnel_dst_port_alloc(struct bnxt *bp, uint16_t port,
 	struct hwrm_tunnel_dst_port_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, TUNNEL_DST_PORT_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_TUNNEL_DST_PORT_ALLOC, BNXT_USE_CHIMP_MB);
 	req.tunnel_type = tunnel_type;
 	req.tunnel_dst_port_val = port;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3375,7 +3379,7 @@ int bnxt_hwrm_tunnel_dst_port_free(struct bnxt *bp, uint16_t port,
 	struct hwrm_tunnel_dst_port_free_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, TUNNEL_DST_PORT_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_TUNNEL_DST_PORT_FREE, BNXT_USE_CHIMP_MB);
 
 	req.tunnel_type = tunnel_type;
 	req.tunnel_dst_port_id = rte_cpu_to_be_16(port);
@@ -3394,7 +3398,7 @@ int bnxt_hwrm_func_cfg_vf_set_flags(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.flags = rte_cpu_to_le_32(flags);
@@ -3424,7 +3428,7 @@ int bnxt_hwrm_func_buf_rgtr(struct bnxt *bp)
 	struct hwrm_func_buf_rgtr_input req = {.req_type = 0 };
 	struct hwrm_func_buf_rgtr_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_BUF_RGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BUF_RGTR, BNXT_USE_CHIMP_MB);
 
 	req.req_buf_num_pages = rte_cpu_to_le_16(1);
 	req.req_buf_page_size = rte_cpu_to_le_16(
@@ -3455,7 +3459,7 @@ int bnxt_hwrm_func_buf_unrgtr(struct bnxt *bp)
 	if (!(BNXT_PF(bp) && bp->pdev->max_vfs))
 		return 0;
 
-	HWRM_PREP(req, FUNC_BUF_UNRGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BUF_UNRGTR, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3471,7 +3475,7 @@ int bnxt_hwrm_func_cfg_def_cp(struct bnxt *bp)
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.flags = rte_cpu_to_le_32(bp->pf.func_cfg_flags);
@@ -3493,7 +3497,7 @@ int bnxt_hwrm_vf_func_cfg_def_cp(struct bnxt *bp)
 	struct hwrm_func_vf_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
 
 	req.enables = rte_cpu_to_le_32(
 			HWRM_FUNC_VF_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
@@ -3515,7 +3519,7 @@ int bnxt_hwrm_set_default_vlan(struct bnxt *bp, int vf, uint8_t is_vf)
 	uint32_t func_cfg_flags;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (is_vf) {
 		dflt_vlan = bp->pf.vf_info[vf].dflt_vlan;
@@ -3547,7 +3551,7 @@ int bnxt_hwrm_func_bw_cfg(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.enables |= rte_cpu_to_le_32(enables);
@@ -3567,7 +3571,7 @@ int bnxt_hwrm_set_vf_vlan(struct bnxt *bp, int vf)
 	struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(bp->pf.vf_info[vf].func_cfg_flags);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
@@ -3604,7 +3608,7 @@ int bnxt_hwrm_reject_fwd_resp(struct bnxt *bp, uint16_t target_id,
 	if (ec_size > sizeof(req.encap_request))
 		return -1;
 
-	HWRM_PREP(req, REJECT_FWD_RESP, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_REJECT_FWD_RESP, BNXT_USE_CHIMP_MB);
 
 	req.encap_resp_target_id = rte_cpu_to_le_16(target_id);
 	memcpy(req.encap_request, encaped, ec_size);
@@ -3624,7 +3628,7 @@ int bnxt_hwrm_func_qcfg_vf_default_mac(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc;
 
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3648,7 +3652,7 @@ int bnxt_hwrm_exec_fwd_resp(struct bnxt *bp, uint16_t target_id,
 	if (ec_size > sizeof(req.encap_request))
 		return -1;
 
-	HWRM_PREP(req, EXEC_FWD_RESP, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_EXEC_FWD_RESP, BNXT_USE_CHIMP_MB);
 
 	req.encap_resp_target_id = rte_cpu_to_le_16(target_id);
 	memcpy(req.encap_request, encaped, ec_size);
@@ -3668,7 +3672,7 @@ int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
 	struct hwrm_stat_ctx_query_input req = {.req_type = 0};
 	struct hwrm_stat_ctx_query_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_QUERY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_QUERY, BNXT_USE_CHIMP_MB);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cid);
 
@@ -3706,7 +3710,7 @@ int bnxt_hwrm_port_qstats(struct bnxt *bp)
 	struct bnxt_pf_info *pf = &bp->pf;
 	int rc;
 
-	HWRM_PREP(req, PORT_QSTATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_QSTATS, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	req.tx_stat_host_addr = rte_cpu_to_le_64(bp->hw_tx_port_stats_map);
@@ -3731,7 +3735,7 @@ int bnxt_hwrm_port_clr_stats(struct bnxt *bp)
 	    BNXT_NPAR(bp) || BNXT_MH(bp) || BNXT_TOTAL_VFS(bp))
 		return 0;
 
-	HWRM_PREP(req, PORT_CLR_STATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_CLR_STATS, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3751,7 +3755,7 @@ int bnxt_hwrm_port_led_qcaps(struct bnxt *bp)
 	if (BNXT_VF(bp))
 		return 0;
 
-	HWRM_PREP(req, PORT_LED_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_LED_QCAPS, BNXT_USE_CHIMP_MB);
 	req.port_id = bp->pf.port_id;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3793,7 +3797,7 @@ int bnxt_hwrm_port_led_cfg(struct bnxt *bp, bool led_on)
 	if (!bp->num_leds || BNXT_VF(bp))
 		return -EOPNOTSUPP;
 
-	HWRM_PREP(req, PORT_LED_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_LED_CFG, BNXT_USE_CHIMP_MB);
 
 	if (led_on) {
 		led_state = HWRM_PORT_LED_CFG_INPUT_LED0_STATE_BLINKALT;
@@ -3826,7 +3830,7 @@ int bnxt_hwrm_nvm_get_dir_info(struct bnxt *bp, uint32_t *entries,
 	struct hwrm_nvm_get_dir_info_input req = {0};
 	struct hwrm_nvm_get_dir_info_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, NVM_GET_DIR_INFO, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_GET_DIR_INFO, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3869,7 +3873,7 @@ int bnxt_get_nvram_directory(struct bnxt *bp, uint32_t len, uint8_t *data)
 			"unable to map response address to physical memory\n");
 		return -ENOMEM;
 	}
-	HWRM_PREP(req, NVM_GET_DIR_ENTRIES, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_GET_DIR_ENTRIES, BNXT_USE_CHIMP_MB);
 	req.host_dest_addr = rte_cpu_to_le_64(dma_handle);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3903,7 +3907,7 @@ int bnxt_hwrm_get_nvram_item(struct bnxt *bp, uint32_t index,
 			"unable to map response address to physical memory\n");
 		return -ENOMEM;
 	}
-	HWRM_PREP(req, NVM_READ, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_READ, BNXT_USE_CHIMP_MB);
 	req.host_dest_addr = rte_cpu_to_le_64(dma_handle);
 	req.dir_idx = rte_cpu_to_le_16(index);
 	req.offset = rte_cpu_to_le_32(offset);
@@ -3925,7 +3929,7 @@ int bnxt_hwrm_erase_nvram_directory(struct bnxt *bp, uint8_t index)
 	struct hwrm_nvm_erase_dir_entry_input req = {0};
 	struct hwrm_nvm_erase_dir_entry_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, NVM_ERASE_DIR_ENTRY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_ERASE_DIR_ENTRY, BNXT_USE_CHIMP_MB);
 	req.dir_idx = rte_cpu_to_le_16(index);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -3958,7 +3962,7 @@ int bnxt_hwrm_flash_nvram(struct bnxt *bp, uint16_t dir_type,
 	}
 	memcpy(buf, data, data_len);
 
-	HWRM_PREP(req, NVM_WRITE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_WRITE, BNXT_USE_CHIMP_MB);
 
 	req.dir_type = rte_cpu_to_le_16(dir_type);
 	req.dir_ordinal = rte_cpu_to_le_16(dir_ordinal);
@@ -4009,7 +4013,7 @@ static int bnxt_hwrm_func_vf_vnic_query(struct bnxt *bp, uint16_t vf,
 	int rc;
 
 	/* First query all VNIC ids */
-	HWRM_PREP(req, FUNC_VF_VNIC_IDS_QUERY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_VNIC_IDS_QUERY, BNXT_USE_CHIMP_MB);
 
 	req.vf_id = rte_cpu_to_le_16(bp->pf.first_vf_id + vf);
 	req.max_vnic_id_cnt = rte_cpu_to_le_32(bp->pf.total_vnics);
@@ -4091,7 +4095,7 @@ int bnxt_hwrm_func_cfg_vf_set_vlan_anti_spoof(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.enables |= rte_cpu_to_le_32(
@@ -4166,7 +4170,7 @@ int bnxt_hwrm_set_em_filter(struct bnxt *bp,
 	if (filter->fw_em_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_em_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_EM_FLOW_ALLOC, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_CFA_EM_FLOW_ALLOC, BNXT_USE_KONG(bp));
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -4238,7 +4242,7 @@ int bnxt_hwrm_clear_em_filter(struct bnxt *bp, struct bnxt_filter_info *filter)
 	if (filter->fw_em_filter_id == UINT64_MAX)
 		return 0;
 
-	HWRM_PREP(req, CFA_EM_FLOW_FREE, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_CFA_EM_FLOW_FREE, BNXT_USE_KONG(bp));
 
 	req.em_filter_id = rte_cpu_to_le_64(filter->fw_em_filter_id);
 
@@ -4266,7 +4270,7 @@ int bnxt_hwrm_set_ntuple_filter(struct bnxt *bp,
 	if (filter->fw_ntuple_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_ntuple_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_NTUPLE_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_NTUPLE_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -4346,7 +4350,7 @@ int bnxt_hwrm_clear_ntuple_filter(struct bnxt *bp,
 	if (filter->fw_ntuple_filter_id == UINT64_MAX)
 		return 0;
 
-	HWRM_PREP(req, CFA_NTUPLE_FILTER_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_NTUPLE_FILTER_FREE, BNXT_USE_CHIMP_MB);
 
 	req.ntuple_filter_id = rte_cpu_to_le_64(filter->fw_ntuple_filter_id);
 
@@ -4377,7 +4381,7 @@ bnxt_vnic_rss_configure_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 		struct bnxt_rx_ring_info *rxr;
 		struct bnxt_cp_ring_info *cpr;
 
-		HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+		HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
 
 		req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 		req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
@@ -4509,7 +4513,7 @@ static int bnxt_hwrm_set_coal_params_thor(struct bnxt *bp,
 	uint16_t flags;
 	int rc;
 
-	HWRM_PREP(req, RING_AGGINT_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_AGGINT_QCAPS, BNXT_USE_CHIMP_MB);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
 
@@ -4546,7 +4550,7 @@ int bnxt_hwrm_set_ring_coal(struct bnxt *bp,
 		return 0;
 	}
 
-	HWRM_PREP(req, RING_CMPL_RING_CFG_AGGINT_PARAMS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS, BNXT_USE_CHIMP_MB);
 	req.ring_id = rte_cpu_to_le_16(ring_id);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -4571,7 +4575,7 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 	    bp->ctx)
 		return 0;
 
-	HWRM_PREP(req, FUNC_BACKING_STORE_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS, BNXT_USE_CHIMP_MB);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT_SILENT();
 
@@ -4650,7 +4654,7 @@ int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, uint32_t enables)
 	if (!ctx)
 		return 0;
 
-	HWRM_PREP(req, FUNC_BACKING_STORE_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_CFG, BNXT_USE_CHIMP_MB);
 	req.enables = rte_cpu_to_le_32(enables);
 
 	if (enables & HWRM_FUNC_BACKING_STORE_CFG_INPUT_ENABLES_QP) {
@@ -4743,7 +4747,7 @@ int bnxt_hwrm_ext_port_qstats(struct bnxt *bp)
 	      bp->flags & BNXT_FLAG_EXT_TX_PORT_STATS))
 		return 0;
 
-	HWRM_PREP(req, PORT_QSTATS_EXT, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_QSTATS_EXT, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	if (bp->flags & BNXT_FLAG_EXT_TX_PORT_STATS) {
@@ -4784,7 +4788,7 @@ bnxt_hwrm_tunnel_redirect(struct bnxt *bp, uint8_t type)
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_TUNNEL_TYPE_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_TUNNEL_TYPE_ALLOC, BNXT_USE_CHIMP_MB);
 	req.tunnel_type = type;
 	req.dest_fid = bp->fw_fid;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -4803,7 +4807,7 @@ bnxt_hwrm_tunnel_redirect_free(struct bnxt *bp, uint8_t type)
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_TUNNEL_TYPE_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_TUNNEL_TYPE_FREE, BNXT_USE_CHIMP_MB);
 	req.tunnel_type = type;
 	req.dest_fid = bp->fw_fid;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -4821,7 +4825,7 @@ int bnxt_hwrm_tunnel_redirect_query(struct bnxt *bp, uint32_t *type)
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_QUERY_TUNNEL_TYPE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_QUERY_TUNNEL_TYPE, BNXT_USE_CHIMP_MB);
 	req.src_fid = bp->fw_fid;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -4842,7 +4846,7 @@ int bnxt_hwrm_tunnel_redirect_info(struct bnxt *bp, uint8_t tun_type,
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_TUNNEL_TYPE_INFO, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_TUNNEL_TYPE_INFO, BNXT_USE_CHIMP_MB);
 	req.src_fid = bp->fw_fid;
 	req.tunnel_type = tun_type;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -4867,7 +4871,7 @@ int bnxt_hwrm_set_mac(struct bnxt *bp)
 	if (!BNXT_VF(bp))
 		return 0;
 
-	HWRM_PREP(req, FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
 
 	req.enables =
 		rte_cpu_to_le_32(HWRM_FUNC_VF_CFG_INPUT_ENABLES_DFLT_MAC_ADDR);
@@ -4900,7 +4904,7 @@ int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
 	if (!up && (bp->flags & BNXT_FLAG_FW_RESET))
 		return 0;
 
-	HWRM_PREP(req, FUNC_DRV_IF_CHANGE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_DRV_IF_CHANGE, BNXT_USE_CHIMP_MB);
 
 	if (up)
 		req.flags =
@@ -4946,7 +4950,7 @@ int bnxt_hwrm_error_recovery_qcfg(struct bnxt *bp)
 		memset(info, 0, sizeof(*info));
 	}
 
-	HWRM_PREP(req, ERROR_RECOVERY_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_ERROR_RECOVERY_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -5022,7 +5026,7 @@ int bnxt_hwrm_fw_reset(struct bnxt *bp)
 	if (!BNXT_PF(bp))
 		return -EOPNOTSUPP;
 
-	HWRM_PREP(req, FW_RESET, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_FW_RESET, BNXT_USE_KONG(bp));
 
 	req.embedded_proc_type =
 		HWRM_FW_RESET_INPUT_EMBEDDED_PROC_TYPE_CHIP;
@@ -5050,7 +5054,7 @@ int bnxt_hwrm_port_ts_query(struct bnxt *bp, uint8_t path, uint64_t *timestamp)
 	if (!ptp)
 		return 0;
 
-	HWRM_PREP(req, PORT_TS_QUERY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_TS_QUERY, BNXT_USE_CHIMP_MB);
 
 	switch (path) {
 	case BNXT_PTP_FLAGS_PATH_TX:
@@ -5098,7 +5102,7 @@ int bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(struct bnxt *bp)
 		return 0;
 	}
 
-	HWRM_PREP(req, CFA_ADV_FLOW_MGNT_QCAPS, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_CFA_ADV_FLOW_MGNT_QCAPS, BNXT_USE_KONG(bp));
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_KONG(bp));
 
 	HWRM_CHECK_RESULT();
-- 
2.7.4
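
The mechanical change repeated in every hunk above is the same:
HWRM_PREP() now takes the request by pointer and the full HWRM_*
request-type value, where the old macro received the request variable
by name and a short type token that it expanded to HWRM_##type. A
minimal sketch of the new convention, with a deliberately simplified
macro body (the real macro also takes the HWRM lock and fills in the
sequence/target ids and the response address):

	/* Simplified sketch only; everything beyond req_type is elided. */
	#define HWRM_PREP(req, type, kong)				\
	do {								\
		memset(req, 0, sizeof(*(req)));				\
		(req)->req_type = rte_cpu_to_le_16(type);		\
	} while (0)

	/* Usage, matching the hunks above: */
	struct hwrm_func_qcfg_input req = {0};

	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);

Taking a pointer, rather than a variable name to memset, is what lets
a generic helper prep a caller-supplied buffer through a plain
struct input *, which the token-pasting form could not do; the TruFlow
message handlers in the next patch rely on exactly that.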


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 03/33] net/bnxt: add truflow message handlers
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
  2020-03-17 15:37 ` [dpdk-dev] [PATCH 01/33] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 02/33] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 04/33] net/bnxt: add initial tf core session open Venkat Duvvuru
                   ` (30 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Pete Spreadborough, Randy Schacher

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- Add bnxt message functions for truflow APIs
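
A hypothetical caller sketch follows; the response buffer is an
illustrative placeholder, and the type/subtype constants are the ones
introduced by hwrm_tf.h later in this series. The direct variant preps
the caller's buffer itself, so that buffer must begin with the standard
struct input HWRM header; the tunneled variant instead wraps the
payload in an HWRM_TF (hwrm_cfa_tflib) message and returns the
TFLIB-internal status separately through tf_response_code:

	uint32_t tf_rc = 0;
	struct tf_session_attach_input req = { 0 };
	uint8_t resp[64];	/* illustrative response buffer */
	int rc;

	/* Tunneled: the payload rides inside an HWRM_TF message. */
	rc = bnxt_hwrm_tf_message_tunneled(bp, false,
					   TF_TYPE_TRUFLOW,
					   HWRM_TFT_SESSION_ATTACH,
					   &tf_rc,
					   &req, sizeof(req),
					   resp, sizeof(resp));
	if (rc || tf_rc)
		return rc ? rc : -EIO;	/* transport vs TFLIB failure */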

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c | 83 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h | 18 ++++++++++
 2 files changed, 101 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 2fb78b6..5f0c13e 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -261,6 +261,89 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
 
 #define HWRM_UNLOCK()		rte_spinlock_unlock(&bp->hwrm_lock)
 
+int bnxt_hwrm_tf_message_direct(struct bnxt *bp,
+				bool use_kong_mb,
+				uint16_t msg_type,
+				void *msg,
+				uint32_t msg_len,
+				void *resp_msg,
+				uint32_t resp_len)
+{
+	int rc = 0;
+	bool mailbox = BNXT_USE_CHIMP_MB;
+	struct input *req = msg;
+	struct output *resp = bp->hwrm_cmd_resp_addr;
+
+	if (use_kong_mb)
+		mailbox = BNXT_USE_KONG(bp);
+
+	HWRM_PREP(req, msg_type, mailbox);
+
+	rc = bnxt_hwrm_send_message(bp, req, msg_len, mailbox);
+
+	HWRM_CHECK_RESULT();
+
+	if (resp_msg)
+		memcpy(resp_msg, resp, resp_len);
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
+int bnxt_hwrm_tf_message_tunneled(struct bnxt *bp,
+				  bool use_kong_mb,
+				  uint16_t tf_type,
+				  uint16_t tf_subtype,
+				  uint32_t *tf_response_code,
+				  void *msg,
+				  uint32_t msg_len,
+				  void *response,
+				  uint32_t response_len)
+{
+	int rc = 0;
+	struct hwrm_cfa_tflib_input req = { .req_type = 0 };
+	struct hwrm_cfa_tflib_output *resp = bp->hwrm_cmd_resp_addr;
+	bool mailbox = BNXT_USE_CHIMP_MB;
+
+	if (msg_len > sizeof(req.tf_req))
+		return -ENOMEM;
+
+	if (use_kong_mb)
+		mailbox = BNXT_USE_KONG(bp);
+
+	HWRM_PREP(&req, HWRM_TF, mailbox);
+	/* Build request using the user supplied request payload.
+	 * TLV request size is checked at build time against HWRM
+	 * request max size, thus no checking required.
+	 */
+	req.tf_type = tf_type;
+	req.tf_subtype = tf_subtype;
+	memcpy(req.tf_req, msg, msg_len);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), mailbox);
+	HWRM_CHECK_RESULT();
+
+	/* Copy the resp to user provided response buffer */
+	if (response != NULL)
+		/* Post process response data. We need to copy only
+		 * the 'payload' as the HWRM data structure really is
+		 * HWRM header + msg header + payload and the TFLIB
+		 * only provided a payload placeholder.
+		 */
+		if (response_len != 0) {
+			memcpy(response,
+			       resp->tf_resp,
+			       response_len);
+		}
+
+	/* Extract the internal tflib response code */
+	*tf_response_code = resp->tf_resp_code;
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
 int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 {
 	int rc = 0;
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 5eb2ee8..df7aa74 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -69,6 +69,24 @@ HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED
 	bp->rx_cos_queue[x].profile =	\
 		resp->queue_id##x##_service_profile
 
+int bnxt_hwrm_tf_message_tunneled(struct bnxt *bp,
+				  bool use_kong_mb,
+				  uint16_t tf_type,
+				  uint16_t tf_subtype,
+				  uint32_t *tf_response_code,
+				  void *msg,
+				  uint32_t msg_len,
+				  void *response,
+				  uint32_t response_len);
+
+int bnxt_hwrm_tf_message_direct(struct bnxt *bp,
+				bool use_kong_mb,
+				uint16_t msg_type,
+				void *msg,
+				uint32_t msg_len,
+				void *resp_msg,
+				uint32_t resp_len);
+
 int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp,
 				   struct bnxt_vnic_info *vnic);
 int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 04/33] net/bnxt: add initial tf core session open
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (2 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 03/33] net/bnxt: add truflow message handlers Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 05/33] net/bnxt: add initial tf core session close support Venkat Duvvuru
                   ` (29 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add infrastructure support
- Add tf_core open session support
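
As a hypothetical orientation sketch (tf_core.h is only partially
quoted below, so the parameter-struct field names here are assumptions
rather than guaranteed API), the PMD embeds a struct tf handle in
struct bnxt and opens a session over it:

	struct tf_open_session_parms parms = { 0 };	/* assumed name */
	int rc;

	/* The control channel name selects the interface used for
	 * HWRM_TF messaging; TF_SESSION_NAME_MAX is from tf_core.h.
	 */
	snprintf(parms.ctrl_chan_name, TF_SESSION_NAME_MAX,
		 "%s", "0000:02:00.0");

	rc = tf_open_session(&bp->tfp, &parms);	/* assumed signature */
	if (rc)
		return rc;

The fw_session_id and session_name fields carried by
tf_session_attach_input in hwrm_tf.h below are what a second driver
instance would use to attach to a session opened this way.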

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                |   8 +
 drivers/net/bnxt/bnxt.h                  |   7 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       | 971 +++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c       | 145 +++++
 drivers/net/bnxt/tf_core/tf_core.h       | 347 +++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        |  79 +++
 drivers/net/bnxt/tf_core/tf_msg.h        |  44 ++
 drivers/net/bnxt/tf_core/tf_msg_common.h |  47 ++
 drivers/net/bnxt/tf_core/tf_project.h    |  24 +
 drivers/net/bnxt/tf_core/tf_resources.h  |  46 ++
 drivers/net/bnxt/tf_core/tf_rm.h         |  33 ++
 drivers/net/bnxt/tf_core/tf_session.h    |  85 +++
 drivers/net/bnxt/tf_core/tfp.c           | 163 ++++++
 drivers/net/bnxt/tf_core/tfp.h           | 188 ++++++
 14 files changed, 2187 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_core/hwrm_tf.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_project.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_resources.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.h
 create mode 100644 drivers/net/bnxt/tf_core/tfp.c
 create mode 100644 drivers/net/bnxt/tf_core/tfp.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index b77532b..0686988 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -43,6 +43,14 @@ ifeq ($(CONFIG_RTE_ARCH_X86), y)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW), y)
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core
+endif
+
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_core.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_msg.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tfp.c
+
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 07fb4df..0142acb 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -21,6 +21,10 @@
 #include "bnxt_cpr.h"
 #include "bnxt_util.h"
 
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW
+#include "tf_core.h"
+#endif
+
 /* Vendor ID */
 #define PCI_VENDOR_ID_BROADCOM		0x14E4
 
@@ -680,6 +684,9 @@ struct bnxt {
 /* TCAM and EM should be 16-bit only. Other modes not supported. */
 #define BNXT_FLOW_ID_MASK	0x0000ffff
 	struct bnxt_mark_info	*mark_table;
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW
+	struct tf               tfp;
+#endif
 };
 
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
new file mode 100644
index 0000000..a8a5547
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -0,0 +1,971 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+#ifndef _HWRM_TF_H_
+#define _HWRM_TF_H_
+
+#include "tf_core.h"
+
+typedef enum tf_type {
+	TF_TYPE_TRUFLOW,
+	TF_TYPE_LAST = TF_TYPE_TRUFLOW,
+} tf_type_t;
+
+typedef enum tf_subtype {
+	HWRM_TFT_SESSION_ATTACH = 712,
+	HWRM_TFT_SESSION_HW_RESC_QCAPS = 721,
+	HWRM_TFT_SESSION_HW_RESC_ALLOC = 722,
+	HWRM_TFT_SESSION_HW_RESC_FREE = 723,
+	HWRM_TFT_SESSION_HW_RESC_FLUSH = 724,
+	HWRM_TFT_SESSION_SRAM_RESC_QCAPS = 725,
+	HWRM_TFT_SESSION_SRAM_RESC_ALLOC = 726,
+	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
+	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
+	HWRM_TFT_TBL_SCOPE_CFG = 731,
+	HWRM_TFT_EM_RULE_INSERT = 739,
+	HWRM_TFT_EM_RULE_DELETE = 740,
+	HWRM_TFT_REG_GET = 821,
+	HWRM_TFT_REG_SET = 822,
+	HWRM_TFT_TBL_TYPE_SET = 823,
+	HWRM_TFT_TBL_TYPE_GET = 824,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET,
+} tf_subtype_t;
+
+/* Request and Response compile time checking */
+/* u32_t	tlv_req_value[26]; */
+#define TF_MAX_REQ_SIZE 104
+/* u32_t	tlv_resp_value[170]; */
+#define TF_MAX_RESP_SIZE 680
+#define BUILD_BUG_ON(condition) typedef char p__LINE__[(condition) ? 1 : -1]
+
+/* Use this to allocate/free any kind of
+ * indexes over HWRM and fill the parms pointer
+ */
+#define TF_BULK_RECV	 128
+#define TF_BULK_SEND	  16
+
+/* EM Key value */
+#define TF_DEV_DATA_TYPE_TF_EM_RULE_INSERT_KEY_DATA 0x2e30UL
+/* EM Key value */
+#define TF_DEV_DATA_TYPE_TF_EM_RULE_DELETE_KEY_DATA 0x2e40UL
+/* L2 Context DMA Address Type */
+#define TF_DEV_DATA_TYPE_TF_L2_CTX_DMA_ADDR		0x2fe0UL
+/* L2 Context Entry */
+#define TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY		0x2fe1UL
+/* Prof tcam DMA Address Type */
+#define TF_DEV_DATA_TYPE_TF_PROF_TCAM_DMA_ADDR		0x3030UL
+/* Prof tcam Entry */
+#define TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY		0x3031UL
+/* WC DMA Address Type */
+#define TF_DEV_DATA_TYPE_TF_WC_DMA_ADDR			0x30d0UL
+/* WC Entry */
+#define TF_DEV_DATA_TYPE_TF_WC_ENTRY			0x30d1UL
+/* Action Data */
+#define TF_DEV_DATA_TYPE_TF_ACTION_DATA			0x3170UL
+#define TF_DEV_DATA_TYPE_LAST   TF_DEV_DATA_TYPE_TF_ACTION_DATA
+
+#define TF_BITS2BYTES(x) (((x) + 7) >> 3)
+#define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
+
+struct tf_session_attach_input;
+struct tf_session_hw_resc_qcaps_input;
+struct tf_session_hw_resc_qcaps_output;
+struct tf_session_hw_resc_alloc_input;
+struct tf_session_hw_resc_alloc_output;
+struct tf_session_hw_resc_free_input;
+struct tf_session_hw_resc_flush_input;
+struct tf_session_sram_resc_qcaps_input;
+struct tf_session_sram_resc_qcaps_output;
+struct tf_session_sram_resc_alloc_input;
+struct tf_session_sram_resc_alloc_output;
+struct tf_session_sram_resc_free_input;
+struct tf_session_sram_resc_flush_input;
+struct tf_tbl_type_set_input;
+struct tf_tbl_type_get_input;
+struct tf_tbl_type_get_output;
+struct tf_em_internal_insert_input;
+struct tf_em_internal_insert_output;
+struct tf_em_internal_delete_input;
+/* Input params for session attach */
+typedef struct tf_session_attach_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* Session Name */
+	char				 session_name[TF_SESSION_NAME_MAX];
+} tf_session_attach_input_t, *ptf_session_attach_input_t;
+BUILD_BUG_ON(sizeof(tf_session_attach_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource HW qcaps */
+typedef struct tf_session_hw_resc_qcaps_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
+} tf_session_hw_resc_qcaps_input_t, *ptf_session_hw_resc_qcaps_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_qcaps_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource HW qcaps */
+typedef struct tf_session_hw_resc_qcaps_output {
+	/* Control Flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates Static partitioning */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
+	/* When set to 1, indicates Strategy 1 */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
+	/* When set to 1, indicates Strategy 2 */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
+	/* When set to 1, indicates Strategy 3 */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
+	/* Unused */
+	uint8_t			  unused[4];
+	/* Minimum guaranteed number of L2 Ctx */
+	uint16_t			 l2_ctx_tcam_entries_min;
+	/* Maximum non-guaranteed number of L2 Ctx */
+	uint16_t			 l2_ctx_tcam_entries_max;
+	/* Minimum guaranteed number of profile functions */
+	uint16_t			 prof_func_min;
+	/* Maximum non-guaranteed number of profile functions */
+	uint16_t			 prof_func_max;
+	/* Minimum guaranteed number of profile TCAM entries */
+	uint16_t			 prof_tcam_entries_min;
+	/* Maximum non-guaranteed number of profile TCAM entries */
+	uint16_t			 prof_tcam_entries_max;
+	/* Minimum guaranteed number of EM profile ID */
+	uint16_t			 em_prof_id_min;
+	/* Maximum non-guaranteed number of EM profile ID */
+	uint16_t			 em_prof_id_max;
+	/* Minimum guaranteed number of EM records entries */
+	uint16_t			 em_record_entries_min;
+	/* Maximum non-guaranteed number of EM record entries */
+	uint16_t			 em_record_entries_max;
+	/* Minimum guaranteed number of WC TCAM profile ID */
+	uint16_t			 wc_tcam_prof_id_min;
+	/* Maximum non-guaranteed number of WC TCAM profile ID */
+	uint16_t			 wc_tcam_prof_id_max;
+	/* Minimum guaranteed number of WC TCAM entries */
+	uint16_t			 wc_tcam_entries_min;
+	/* Maximum non-guaranteed number of WC TCAM entries */
+	uint16_t			 wc_tcam_entries_max;
+	/* Minimum guaranteed number of meter profiles */
+	uint16_t			 meter_profiles_min;
+	/* Maximum non-guaranteed number of meter profiles */
+	uint16_t			 meter_profiles_max;
+	/* Minimum guaranteed number of meter instances */
+	uint16_t			 meter_inst_min;
+	/* Maximum non-guaranteed number of meter instances */
+	uint16_t			 meter_inst_max;
+	/* Minimum guaranteed number of mirrors */
+	uint16_t			 mirrors_min;
+	/* Maximum non-guaranteed number of mirrors */
+	uint16_t			 mirrors_max;
+	/* Minimum guaranteed number of UPAR */
+	uint16_t			 upar_min;
+	/* Maximum non-guaranteed number of UPAR */
+	uint16_t			 upar_max;
+	/* Minimum guaranteed number of SP TCAM entries */
+	uint16_t			 sp_tcam_entries_min;
+	/* Maximum non-guaranteed number of SP TCAM entries */
+	uint16_t			 sp_tcam_entries_max;
+	/* Minimum guaranteed number of L2 Functions */
+	uint16_t			 l2_func_min;
+	/* Maximum non-guaranteed number of L2 Functions */
+	uint16_t			 l2_func_max;
+	/* Minimum guaranteed number of flexible key templates */
+	uint16_t			 flex_key_templ_min;
+	/* Maximum non-guaranteed number of flexible key templates */
+	uint16_t			 flex_key_templ_max;
+	/* Minimum guaranteed number of table Scopes */
+	uint16_t			 tbl_scope_min;
+	/* Maximum non-guaranteed number of table Scopes */
+	uint16_t			 tbl_scope_max;
+	/* Minimum guaranteed number of epoch0 entries */
+	uint16_t			 epoch0_entries_min;
+	/* Maximum non-guaranteed number of epoch0 entries */
+	uint16_t			 epoch0_entries_max;
+	/* Minimum guaranteed number of epoch1 entries */
+	uint16_t			 epoch1_entries_min;
+	/* Maximum non-guaranteed number of epoch1 entries */
+	uint16_t			 epoch1_entries_max;
+	/* Minimum guaranteed number of metadata */
+	uint16_t			 metadata_min;
+	/* Maximum non-guaranteed number of metadata */
+	uint16_t			 metadata_max;
+	/* Minimum guaranteed number of CT states */
+	uint16_t			 ct_state_min;
+	/* Maximum non-guaranteed number of CT states */
+	uint16_t			 ct_state_max;
+	/* Minimum guaranteed number of range profiles */
+	uint16_t			 range_prof_min;
+	/* Maximum non-guaranteed number of range profiles */
+	uint16_t			 range_prof_max;
+	/* Minimum guaranteed number of range entries */
+	uint16_t			 range_entries_min;
+	/* Maximum non-guaranteed number of range entries */
+	uint16_t			 range_entries_max;
+	/* Minimum guaranteed number of LAG table entries */
+	uint16_t			 lag_tbl_entries_min;
+	/* Maximum non-guaranteed number of LAG table entries */
+	uint16_t			 lag_tbl_entries_max;
+} tf_session_hw_resc_qcaps_output_t, *ptf_session_hw_resc_qcaps_output_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_qcaps_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource HW alloc */
+typedef struct tf_session_hw_resc_alloc_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Number of L2 CTX TCAM entries to be allocated */
+	uint16_t			 num_l2_ctx_tcam_entries;
+	/* Number of profile functions to be allocated */
+	uint16_t			 num_prof_func_entries;
+	/* Number of profile TCAM entries to be allocated */
+	uint16_t			 num_prof_tcam_entries;
+	/* Number of EM profile ids to be allocated */
+	uint16_t			 num_em_prof_id;
+	/* Number of EM records entries to be allocated */
+	uint16_t			 num_em_record_entries;
+	/* Number of WC profiles ids to be allocated */
+	uint16_t			 num_wc_tcam_prof_id;
+	/* Number of WC TCAM entries to be allocated */
+	uint16_t			 num_wc_tcam_entries;
+	/* Number of meter profiles to be allocated */
+	uint16_t			 num_meter_profiles;
+	/* Number of meter instances to be allocated */
+	uint16_t			 num_meter_inst;
+	/* Number of mirrors to be allocated */
+	uint16_t			 num_mirrors;
+	/* Number of UPAR to be allocated */
+	uint16_t			 num_upar;
+	/* Number of SP TCAM entries to be allocated */
+	uint16_t			 num_sp_tcam_entries;
+	/* Number of L2 functions to be allocated */
+	uint16_t			 num_l2_func;
+	/* Number of flexible key templates to be allocated */
+	uint16_t			 num_flex_key_templ;
+	/* Number of table scopes to be allocated */
+	uint16_t			 num_tbl_scope;
+	/* Number of epoch0 entries to be allocated */
+	uint16_t			 num_epoch0_entries;
+	/* Number of epoch1 entries to be allocated */
+	uint16_t			 num_epoch1_entries;
+	/* Number of metadata to be allocated */
+	uint16_t			 num_metadata;
+	/* Number of CT states to be allocated */
+	uint16_t			 num_ct_state;
+	/* Number of range profiles to be allocated */
+	uint16_t			 num_range_prof;
+	/* Number of range Entries to be allocated */
+	uint16_t			 num_range_entries;
+	/* Number of LAG table entries to be allocated */
+	uint16_t			 num_lag_tbl_entries;
+} tf_session_hw_resc_alloc_input_t, *ptf_session_hw_resc_alloc_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_alloc_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource HW alloc */
+typedef struct tf_session_hw_resc_alloc_output {
+	/* Starting index of L2 CTX TCAM entries allocated to the session */
+	uint16_t			 l2_ctx_tcam_entries_start;
+	/* Number of L2 CTX TCAM entries allocated */
+	uint16_t			 l2_ctx_tcam_entries_stride;
+	/* Starting index of profile functions allocated to the session */
+	uint16_t			 prof_func_start;
+	/* Number of profile functions allocated */
+	uint16_t			 prof_func_stride;
+	/* Starting index of profile TCAM entries allocated to the session */
+	uint16_t			 prof_tcam_entries_start;
+	/* Number of profile TCAM entries allocated */
+	uint16_t			 prof_tcam_entries_stride;
+	/* Starting index of EM profile ids allocated to the session */
+	uint16_t			 em_prof_id_start;
+	/* Number of EM profile ids allocated */
+	uint16_t			 em_prof_id_stride;
+	/* Starting index of EM record entries allocated to the session */
+	uint16_t			 em_record_entries_start;
+	/* Number of EM record entries allocated */
+	uint16_t			 em_record_entries_stride;
+	/* Starting index of WC TCAM profiles ids allocated to the session */
+	uint16_t			 wc_tcam_prof_id_start;
+	/* Number of WC TCAM profile ids allocated */
+	uint16_t			 wc_tcam_prof_id_stride;
+	/* Starting index of WC TCAM entries allocated to the session */
+	uint16_t			 wc_tcam_entries_start;
+	/* Number of WC TCAM entries allocated */
+	uint16_t			 wc_tcam_entries_stride;
+	/* Starting index of meter profiles allocated to the session */
+	uint16_t			 meter_profiles_start;
+	/* Number of meter profiles allocated */
+	uint16_t			 meter_profiles_stride;
+	/* Starting index of meter instance allocated to the session */
+	uint16_t			 meter_inst_start;
+	/* Number of meter instance allocated */
+	uint16_t			 meter_inst_stride;
+	/* Starting index of mirrors allocated to the session */
+	uint16_t			 mirrors_start;
+	/* Number of mirrors allocated */
+	uint16_t			 mirrors_stride;
+	/* Starting index of UPAR allocated to the session */
+	uint16_t			 upar_start;
+	/* Number of UPAR allocated */
+	uint16_t			 upar_stride;
+	/* Starting index of SP TCAM entries allocated to the session */
+	uint16_t			 sp_tcam_entries_start;
+	/* Number of SP TCAM entries allocated */
+	uint16_t			 sp_tcam_entries_stride;
+	/* Starting index of L2 functions allocated to the session */
+	uint16_t			 l2_func_start;
+	/* Number of L2 functions allocated */
+	uint16_t			 l2_func_stride;
+	/* Starting index of flexible key templates allocated to the session */
+	uint16_t			 flex_key_templ_start;
+	/* Number of flexible key templates allocated */
+	uint16_t			 flex_key_templ_stride;
+	/* Starting index of table scopes allocated to the session */
+	uint16_t			 tbl_scope_start;
+	/* Number of table scopes allocated */
+	uint16_t			 tbl_scope_stride;
+	/* Starting index of epoch0 entries allocated to the session */
+	uint16_t			 epoch0_entries_start;
+	/* Number of epoch0 entries allocated */
+	uint16_t			 epoch0_entries_stride;
+	/* Starting index of epoch1 entries allocated to the session */
+	uint16_t			 epoch1_entries_start;
+	/* Number of epoch1 entries allocated */
+	uint16_t			 epoch1_entries_stride;
+	/* Starting index of metadata allocated to the session */
+	uint16_t			 metadata_start;
+	/* Number of metadata allocated */
+	uint16_t			 metadata_stride;
+	/* Starting index of CT states allocated to the session */
+	uint16_t			 ct_state_start;
+	/* Number of CT states allocated */
+	uint16_t			 ct_state_stride;
+	/* Starting index of range profiles allocated to the session */
+	uint16_t			 range_prof_start;
+	/* Number of range profiles allocated */
+	uint16_t			 range_prof_stride;
+	/* Starting index of range entries allocated to the session */
+	uint16_t			 range_entries_start;
+	/* Number of range entries allocated */
+	uint16_t			 range_entries_stride;
+	/* Starting index of LAG table entries allocated to the session */
+	uint16_t			 lag_tbl_entries_start;
+	/* Number of LAG table entries allocated */
+	uint16_t			 lag_tbl_entries_stride;
+} tf_session_hw_resc_alloc_output_t, *ptf_session_hw_resc_alloc_output_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_alloc_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource HW free */
+typedef struct tf_session_hw_resc_free_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Starting index of L2 CTX TCAM entries allocated to the session */
+	uint16_t			 l2_ctx_tcam_entries_start;
+	/* Number of L2 CTX TCAM entries allocated */
+	uint16_t			 l2_ctx_tcam_entries_stride;
+	/* Starting index of profile functions allocated to the session */
+	uint16_t			 prof_func_start;
+	/* Number of profile functions allocated */
+	uint16_t			 prof_func_stride;
+	/* Starting index of profile TCAM entries allocated to the session */
+	uint16_t			 prof_tcam_entries_start;
+	/* Number of profile TCAM entries allocated */
+	uint16_t			 prof_tcam_entries_stride;
+	/* Starting index of EM profile ids allocated to the session */
+	uint16_t			 em_prof_id_start;
+	/* Number of EM profile ids allocated */
+	uint16_t			 em_prof_id_stride;
+	/* Starting index of EM record entries allocated to the session */
+	uint16_t			 em_record_entries_start;
+	/* Number of EM record entries allocated */
+	uint16_t			 em_record_entries_stride;
+	/* Starting index of WC TCAM profiles ids allocated to the session */
+	uint16_t			 wc_tcam_prof_id_start;
+	/* Number of WC TCAM profile ids allocated */
+	uint16_t			 wc_tcam_prof_id_stride;
+	/* Starting index of WC TCAM entries allocated to the session */
+	uint16_t			 wc_tcam_entries_start;
+	/* Number of WC TCAM entries allocated */
+	uint16_t			 wc_tcam_entries_stride;
+	/* Starting index of meter profiles allocated to the session */
+	uint16_t			 meter_profiles_start;
+	/* Number of meter profiles allocated */
+	uint16_t			 meter_profiles_stride;
+	/* Starting index of meter instance allocated to the session */
+	uint16_t			 meter_inst_start;
+	/* Number of meter instance allocated */
+	uint16_t			 meter_inst_stride;
+	/* Starting index of mirrors allocated to the session */
+	uint16_t			 mirrors_start;
+	/* Number of mirrors allocated */
+	uint16_t			 mirrors_stride;
+	/* Starting index of UPAR allocated to the session */
+	uint16_t			 upar_start;
+	/* Number of UPAR allocated */
+	uint16_t			 upar_stride;
+	/* Starting index of SP TCAM entries allocated to the session */
+	uint16_t			 sp_tcam_entries_start;
+	/* Number of SP TCAM entries allocated */
+	uint16_t			 sp_tcam_entries_stride;
+	/* Starting index of L2 functions allocated to the session */
+	uint16_t			 l2_func_start;
+	/* Number of L2 functions allocated */
+	uint16_t			 l2_func_stride;
+	/* Starting index of flexible key templates allocated to the session */
+	uint16_t			 flex_key_templ_start;
+	/* Number of flexible key templates allocated */
+	uint16_t			 flex_key_templ_stride;
+	/* Starting index of table scopes allocated to the session */
+	uint16_t			 tbl_scope_start;
+	/* Number of table scopes allocated */
+	uint16_t			 tbl_scope_stride;
+	/* Starting index of epoch0 entries allocated to the session */
+	uint16_t			 epoch0_entries_start;
+	/* Number of epoch0 entries allocated */
+	uint16_t			 epoch0_entries_stride;
+	/* Starting index of epoch1 entries allocated to the session */
+	uint16_t			 epoch1_entries_start;
+	/* Number of epoch1 entries allocated */
+	uint16_t			 epoch1_entries_stride;
+	/* Starting index of metadata allocated to the session */
+	uint16_t			 metadata_start;
+	/* Number of metadata allocated */
+	uint16_t			 metadata_stride;
+	/* Starting index of CT states allocated to the session */
+	uint16_t			 ct_state_start;
+	/* Number of CT states allocated */
+	uint16_t			 ct_state_stride;
+	/* Starting index of range profiles allocated to the session */
+	uint16_t			 range_prof_start;
+	/* Number of range profiles allocated */
+	uint16_t			 range_prof_stride;
+	/* Starting index of range entries allocated to the session */
+	uint16_t			 range_entries_start;
+	/* Number of range entries allocated */
+	uint16_t			 range_entries_stride;
+	/* Starting index of LAG table entries allocated to the session */
+	uint16_t			 lag_tbl_entries_start;
+	/* Number of LAG table entries allocated */
+	uint16_t			 lag_tbl_entries_stride;
+} tf_session_hw_resc_free_input_t, *ptf_session_hw_resc_free_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_free_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource HW flush */
+typedef struct tf_session_hw_resc_flush_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the flush apply to RX */
+#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the flush apply to TX */
+#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Starting index of L2 CTX TCAM entries allocated to the session */
+	uint16_t			 l2_ctx_tcam_entries_start;
+	/* Number of L2 CTX TCAM entries allocated */
+	uint16_t			 l2_ctx_tcam_entries_stride;
+	/* Starting index of profile functions allocated to the session */
+	uint16_t			 prof_func_start;
+	/* Number of profile functions allocated */
+	uint16_t			 prof_func_stride;
+	/* Starting index of profile TCAM entries allocated to the session */
+	uint16_t			 prof_tcam_entries_start;
+	/* Number of profile TCAM entries allocated */
+	uint16_t			 prof_tcam_entries_stride;
+	/* Starting index of EM profile ids allocated to the session */
+	uint16_t			 em_prof_id_start;
+	/* Number of EM profile ids allocated */
+	uint16_t			 em_prof_id_stride;
+	/* Starting index of EM record entries allocated to the session */
+	uint16_t			 em_record_entries_start;
+	/* Number of EM record entries allocated */
+	uint16_t			 em_record_entries_stride;
+	/* Starting index of WC TCAM profiles ids allocated to the session */
+	uint16_t			 wc_tcam_prof_id_start;
+	/* Number of WC TCAM profile ids allocated */
+	uint16_t			 wc_tcam_prof_id_stride;
+	/* Starting index of WC TCAM entries allocated to the session */
+	uint16_t			 wc_tcam_entries_start;
+	/* Number of WC TCAM entries allocated */
+	uint16_t			 wc_tcam_entries_stride;
+	/* Starting index of meter profiles allocated to the session */
+	uint16_t			 meter_profiles_start;
+	/* Number of meter profiles allocated */
+	uint16_t			 meter_profiles_stride;
+	/* Starting index of meter instance allocated to the session */
+	uint16_t			 meter_inst_start;
+	/* Number of meter instance allocated */
+	uint16_t			 meter_inst_stride;
+	/* Starting index of mirrors allocated to the session */
+	uint16_t			 mirrors_start;
+	/* Number of mirrors allocated */
+	uint16_t			 mirrors_stride;
+	/* Starting index of UPAR allocated to the session */
+	uint16_t			 upar_start;
+	/* Number of UPAR allocated */
+	uint16_t			 upar_stride;
+	/* Starting index of SP TCAM entries allocated to the session */
+	uint16_t			 sp_tcam_entries_start;
+	/* Number of SP TCAM entries allocated */
+	uint16_t			 sp_tcam_entries_stride;
+	/* Starting index of L2 functions allocated to the session */
+	uint16_t			 l2_func_start;
+	/* Number of L2 functions allocated */
+	uint16_t			 l2_func_stride;
+	/* Starting index of flexible key templates allocated to the session */
+	uint16_t			 flex_key_templ_start;
+	/* Number of flexible key templates allocated */
+	uint16_t			 flex_key_templ_stride;
+	/* Starting index of table scopes allocated to the session */
+	uint16_t			 tbl_scope_start;
+	/* Number of table scopes allocated */
+	uint16_t			 tbl_scope_stride;
+	/* Starting index of epoch0 entries allocated to the session */
+	uint16_t			 epoch0_entries_start;
+	/* Number of epoch0 entries allocated */
+	uint16_t			 epoch0_entries_stride;
+	/* Starting index of epoch1 entries allocated to the session */
+	uint16_t			 epoch1_entries_start;
+	/* Number of epoch1 entries allocated */
+	uint16_t			 epoch1_entries_stride;
+	/* Starting index of metadata allocated to the session */
+	uint16_t			 metadata_start;
+	/* Number of metadata allocated */
+	uint16_t			 metadata_stride;
+	/* Starting index of CT states allocated to the session */
+	uint16_t			 ct_state_start;
+	/* Number of CT states allocated */
+	uint16_t			 ct_state_stride;
+	/* Starting index of range profiles allocated to the session */
+	uint16_t			 range_prof_start;
+	/* Number of range profiles allocated */
+	uint16_t			 range_prof_stride;
+	/* Starting index of range entries allocated to the session */
+	uint16_t			 range_entries_start;
+	/* Number of range entries allocated */
+	uint16_t			 range_entries_stride;
+	/* Starting index of LAG table entries allocated to the session */
+	uint16_t			 lag_tbl_entries_start;
+	/* Number of LAG table entries allocated */
+	uint16_t			 lag_tbl_entries_stride;
+} tf_session_hw_resc_flush_input_t, *ptf_session_hw_resc_flush_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_flush_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource SRAM qcaps */
+typedef struct tf_session_sram_resc_qcaps_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query applies to RX */
+#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the query applies to TX */
+#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
+} tf_session_sram_resc_qcaps_input_t, *ptf_session_sram_resc_qcaps_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_qcaps_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource SRAM qcaps */
+typedef struct tf_session_sram_resc_qcaps_output {
+	/* Flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates Static partitioning */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
+	/* When set to 1, indicates Strategy 1 */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
+	/* When set to 2, indicates Strategy 2 */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
+	/* When set to 3, indicates Strategy 3 */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
+	/* Minimum guaranteed number of Full Action */
+	uint16_t			 full_action_min;
+	/* Maximum non-guaranteed number of Full Action */
+	uint16_t			 full_action_max;
+	/* Minimum guaranteed number of MCG */
+	uint16_t			 mcg_min;
+	/* Maximum non-guaranteed number of MCG */
+	uint16_t			 mcg_max;
+	/* Minimum guaranteed number of Encap 8B */
+	uint16_t			 encap_8b_min;
+	/* Maximum non-guaranteed number of Encap 8B */
+	uint16_t			 encap_8b_max;
+	/* Minimum guaranteed number of Encap 16B */
+	uint16_t			 encap_16b_min;
+	/* Maximum non-guaranteed number of Encap 16B */
+	uint16_t			 encap_16b_max;
+	/* Minimum guaranteed number of Encap 64B */
+	uint16_t			 encap_64b_min;
+	/* Maximum non-guaranteed number of Encap 64B */
+	uint16_t			 encap_64b_max;
+	/* Minimum guaranteed number of SP SMAC */
+	uint16_t			 sp_smac_min;
+	/* Maximum non-guaranteed number of SP SMAC */
+	uint16_t			 sp_smac_max;
+	/* Minimum guaranteed number of SP SMAC IPv4 */
+	uint16_t			 sp_smac_ipv4_min;
+	/* Maximum non-guaranteed number of SP SMAC IPv4 */
+	uint16_t			 sp_smac_ipv4_max;
+	/* Minimum guaranteed number of SP SMAC IPv6 */
+	uint16_t			 sp_smac_ipv6_min;
+	/* Maximum non-guaranteed number of SP SMAC IPv6 */
+	uint16_t			 sp_smac_ipv6_max;
+	/* Minimum guaranteed number of Counter 64B */
+	uint16_t			 counter_64b_min;
+	/* Maximum non-guaranteed number of Counter 64B */
+	uint16_t			 counter_64b_max;
+	/* Minimum guaranteed number of NAT SPORT */
+	uint16_t			 nat_sport_min;
+	/* Maximum non-guaranteed number of NAT SPORT */
+	uint16_t			 nat_sport_max;
+	/* Minimum guaranteed number of NAT DPORT */
+	uint16_t			 nat_dport_min;
+	/* Maximum non-guaranteed number of NAT DPORT */
+	uint16_t			 nat_dport_max;
+	/* Minimum guaranteed number of NAT S_IPV4 */
+	uint16_t			 nat_s_ipv4_min;
+	/* Maximum non-guaranteed number of NAT S_IPV4 */
+	uint16_t			 nat_s_ipv4_max;
+	/* Minimum guaranteed number of NAT D_IPV4 */
+	uint16_t			 nat_d_ipv4_min;
+	/* Maximum non-guaranteed number of NAT D_IPV4 */
+	uint16_t			 nat_d_ipv4_max;
+} tf_session_sram_resc_qcaps_output_t, *ptf_session_sram_resc_qcaps_output_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_qcaps_output_t) <= TF_MAX_RESP_SIZE);
+
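+/*
+ * Illustrative sketch (not part of the message definitions): the
+ * *_min fields above are guaranteed amounts while *_max are
+ * best-effort ceilings, so a caller sizing an SRAM alloc request from
+ * a qcaps response could clamp its ask as below. The helper name is
+ * hypothetical.
+ *
+ *   static uint16_t
+ *   tf_clamp_resc_request(uint16_t want, uint16_t min, uint16_t max)
+ *   {
+ *           if (want < min)
+ *                   return min;
+ *           if (want > max)
+ *                   return max;
+ *           return want;
+ *   }
+ */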
+/* Input params for session resource SRAM alloc */
+typedef struct tf_session_sram_resc_alloc_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the alloc applies to RX */
+#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the alloc applies to TX */
+#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Number of full action SRAM entries to be allocated */
+	uint16_t			 num_full_action;
+	/* Number of multicast groups to be allocated */
+	uint16_t			 num_mcg;
+	/* Number of Encap 8B entries to be allocated */
+	uint16_t			 num_encap_8b;
+	/* Number of Encap 16B entries to be allocated */
+	uint16_t			 num_encap_16b;
+	/* Number of Encap 64B entries to be allocated */
+	uint16_t			 num_encap_64b;
+	/* Number of SP SMAC entries to be allocated */
+	uint16_t			 num_sp_smac;
+	/* Number of SP SMAC IPv4 entries to be allocated */
+	uint16_t			 num_sp_smac_ipv4;
+	/* Number of SP SMAC IPv6 entries to be allocated */
+	uint16_t			 num_sp_smac_ipv6;
+	/* Number of Counter 64B entries to be allocated */
+	uint16_t			 num_counter_64b;
+	/* Number of NAT source ports to be allocated */
+	uint16_t			 num_nat_sport;
+	/* Number of NAT destination ports to be allocated */
+	uint16_t			 num_nat_dport;
+	/* Number of NAT source IPV4 addresses to be allocated */
+	uint16_t			 num_nat_s_ipv4;
+	/* Number of NAT destination IPV4 addresses to be allocated */
+	uint16_t			 num_nat_d_ipv4;
+} tf_session_sram_resc_alloc_input_t, *ptf_session_sram_resc_alloc_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_alloc_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource SRAM alloc */
+typedef struct tf_session_sram_resc_alloc_output {
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Starting index of full action SRAM entries allocated to the session */
+	uint16_t			 full_action_start;
+	/* Number of full action SRAM entries allocated */
+	uint16_t			 full_action_stride;
+	/* Starting index of multicast groups allocated to this session */
+	uint16_t			 mcg_start;
+	/* Number of multicast groups allocated */
+	uint16_t			 mcg_stride;
+	/* Starting index of encap 8B entries allocated to the session */
+	uint16_t			 encap_8b_start;
+	/* Number of encap 8B entries allocated */
+	uint16_t			 encap_8b_stride;
+	/* Starting index of encap 16B entries allocated to the session */
+	uint16_t			 encap_16b_start;
+	/* Number of encap 16B entries allocated */
+	uint16_t			 encap_16b_stride;
+	/* Starting index of encap 64B entries allocated to the session */
+	uint16_t			 encap_64b_start;
+	/* Number of encap 64B entries allocated */
+	uint16_t			 encap_64b_stride;
+	/* Starting index of SP SMAC entries allocated to the session */
+	uint16_t			 sp_smac_start;
+	/* Number of SP SMAC entries allocated */
+	uint16_t			 sp_smac_stride;
+	/* Starting index of SP SMAC IPv4 entries allocated to the session */
+	uint16_t			 sp_smac_ipv4_start;
+	/* Number of SP SMAC IPv4 entries allocated */
+	uint16_t			 sp_smac_ipv4_stride;
+	/* Starting index of SP SMAC IPv6 entries allocated to the session */
+	uint16_t			 sp_smac_ipv6_start;
+	/* Number of SP SMAC IPv6 entries allocated */
+	uint16_t			 sp_smac_ipv6_stride;
+	/* Starting index of Counter 64B entries allocated to the session */
+	uint16_t			 counter_64b_start;
+	/* Number of Counter 64B entries allocated */
+	uint16_t			 counter_64b_stride;
+	/* Starting index of NAT source ports allocated to the session */
+	uint16_t			 nat_sport_start;
+	/* Number of NAT source ports allocated */
+	uint16_t			 nat_sport_stride;
+	/* Starting index of NAT destination ports allocated to the session */
+	uint16_t			 nat_dport_start;
+	/* Number of NAT destination ports allocated */
+	uint16_t			 nat_dport_stride;
+	/* Starting index of NAT source IPV4 addresses allocated to the session */
+	uint16_t			 nat_s_ipv4_start;
+	/* Number of NAT source IPV4 addresses allocated */
+	uint16_t			 nat_s_ipv4_stride;
+	/*
+	 * Starting index of NAT destination IPV4 addresses allocated to the
+	 * session
+	 */
+	uint16_t			 nat_d_ipv4_start;
+	/* Number of NAT destination IPV4 addresses allocated */
+	uint16_t			 nat_d_ipv4_stride;
+} tf_session_sram_resc_alloc_output_t, *ptf_session_sram_resc_alloc_output_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_alloc_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource SRAM free */
+typedef struct tf_session_sram_resc_free_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the free applies to RX */
+#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the free applies to TX */
+#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Starting index of full action SRAM entries allocated to the session */
+	uint16_t			 full_action_start;
+	/* Number of full action SRAM entries allocated */
+	uint16_t			 full_action_stride;
+	/* Starting index of multicast groups allocated to this session */
+	uint16_t			 mcg_start;
+	/* Number of multicast groups allocated */
+	uint16_t			 mcg_stride;
+	/* Starting index of encap 8B entries allocated to the session */
+	uint16_t			 encap_8b_start;
+	/* Number of encap 8B entries allocated */
+	uint16_t			 encap_8b_stride;
+	/* Starting index of encap 16B entries allocated to the session */
+	uint16_t			 encap_16b_start;
+	/* Number of encap 16B entries allocated */
+	uint16_t			 encap_16b_stride;
+	/* Starting index of encap 64B entries allocated to the session */
+	uint16_t			 encap_64b_start;
+	/* Number of encap 64B entries allocated */
+	uint16_t			 encap_64b_stride;
+	/* Starting index of SP SMAC entries allocated to the session */
+	uint16_t			 sp_smac_start;
+	/* Number of SP SMAC entries allocated */
+	uint16_t			 sp_smac_stride;
+	/* Starting index of SP SMAC IPv4 entries allocated to the session */
+	uint16_t			 sp_smac_ipv4_start;
+	/* Number of SP SMAC IPv4 entries allocated */
+	uint16_t			 sp_smac_ipv4_stride;
+	/* Starting index of SP SMAC IPv6 entries allocated to the session */
+	uint16_t			 sp_smac_ipv6_start;
+	/* Number of SP SMAC IPv6 entries allocated */
+	uint16_t			 sp_smac_ipv6_stride;
+	/* Starting index of Counter 64B entries allocated to the session */
+	uint16_t			 counter_64b_start;
+	/* Number of Counter 64B entries allocated */
+	uint16_t			 counter_64b_stride;
+	/* Starting index of NAT source ports allocated to the session */
+	uint16_t			 nat_sport_start;
+	/* Number of NAT source ports allocated */
+	uint16_t			 nat_sport_stride;
+	/* Starting index of NAT destination ports allocated to the session */
+	uint16_t			 nat_dport_start;
+	/* Number of NAT destination ports allocated */
+	uint16_t			 nat_dport_stride;
+	/* Starting index of NAT source IPV4 addresses allocated to the session */
+	uint16_t			 nat_s_ipv4_start;
+	/* Number of NAT source IPV4 addresses allocated */
+	uint16_t			 nat_s_ipv4_stride;
+	/*
+	 * Starting index of NAT destination IPV4 addresses allocated to the
+	 * session
+	 */
+	uint16_t			 nat_d_ipv4_start;
+	/* Number of NAT destination IPV4 addresses allocated */
+	uint16_t			 nat_d_ipv4_stride;
+} tf_session_sram_resc_free_input_t, *ptf_session_sram_resc_free_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_free_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource SRAM flush */
+typedef struct tf_session_sram_resc_flush_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the flush applies to RX */
+#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the flush applies to TX */
+#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Starting index of full action SRAM entries allocated to the session */
+	uint16_t			 full_action_start;
+	/* Number of full action SRAM entries allocated */
+	uint16_t			 full_action_stride;
+	/* Starting index of multicast groups allocated to this session */
+	uint16_t			 mcg_start;
+	/* Number of multicast groups allocated */
+	uint16_t			 mcg_stride;
+	/* Starting index of encap 8B entries allocated to the session */
+	uint16_t			 encap_8b_start;
+	/* Number of encap 8B entries allocated */
+	uint16_t			 encap_8b_stride;
+	/* Starting index of encap 16B entries allocated to the session */
+	uint16_t			 encap_16b_start;
+	/* Number of encap 16B entries allocated */
+	uint16_t			 encap_16b_stride;
+	/* Starting index of encap 64B entries allocated to the session */
+	uint16_t			 encap_64b_start;
+	/* Number of encap 64B entries allocated */
+	uint16_t			 encap_64b_stride;
+	/* Starting index of SP SMAC entries allocated to the session */
+	uint16_t			 sp_smac_start;
+	/* Number of SP SMAC entries allocated */
+	uint16_t			 sp_smac_stride;
+	/* Starting index of SP SMAC IPv4 entries allocated to the session */
+	uint16_t			 sp_smac_ipv4_start;
+	/* Number of SP SMAC IPv4 entries allocated */
+	uint16_t			 sp_smac_ipv4_stride;
+	/* Starting index of SP SMAC IPv6 entries allocated to the session */
+	uint16_t			 sp_smac_ipv6_start;
+	/* Number of SP SMAC IPv6 entries allocated */
+	uint16_t			 sp_smac_ipv6_stride;
+	/* Starting index of Counter 64B entries allocated to the session */
+	uint16_t			 counter_64b_start;
+	/* Number of Counter 64B entries allocated */
+	uint16_t			 counter_64b_stride;
+	/* Starting index of NAT source ports allocated to the session */
+	uint16_t			 nat_sport_start;
+	/* Number of NAT source ports allocated */
+	uint16_t			 nat_sport_stride;
+	/* Starting index of NAT destination ports allocated to the session */
+	uint16_t			 nat_dport_start;
+	/* Number of NAT destination ports allocated */
+	uint16_t			 nat_dport_stride;
+	/* Starting index of NAT source IPV4 addresses allocated to the session */
+	uint16_t			 nat_s_ipv4_start;
+	/* Number of NAT source IPV4 addresses allocated */
+	uint16_t			 nat_s_ipv4_stride;
+	/*
+	 * Starting index of NAT destination IPV4 addresses allocated to the
+	 * session
+	 */
+	uint16_t			 nat_d_ipv4_start;
+	/* Number of NAT destination IPV4 addresses allocated */
+	uint16_t			 nat_d_ipv4_stride;
+} tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_flush_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for table type get */
+typedef struct tf_tbl_type_get_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the get applies to RX */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
+	/* When set to 1, indicates the get applies to TX */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* Type of the object to get */
+	uint32_t			 type;
+	/* Index to get */
+	uint32_t			 index;
+} tf_tbl_type_get_input_t, *ptf_tbl_type_get_input_t;
+BUILD_BUG_ON(sizeof(tf_tbl_type_get_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for table type get */
+typedef struct tf_tbl_type_get_output {
+	/* Size of the data read in bytes */
+	uint16_t			 size;
+	/* Data read */
+	uint8_t			  data[TF_BULK_RECV];
+} tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
+BUILD_BUG_ON(sizeof(tf_tbl_type_get_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for EM internal rule insert */
+typedef struct tf_em_internal_insert_input {
+	/* Firmware Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the insert applies to RX */
+#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the insert applies to TX */
+#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* strength */
+	uint16_t			 strength;
+	/* index to action */
+	uint32_t			 action_ptr;
+	/* index of em record */
+	uint32_t			 em_record_idx;
+	/* EM Key value */
+	uint64_t			 em_key[8];
+	/* number of bits in em_key */
+	uint16_t			 em_key_bitlen;
+} tf_em_internal_insert_input_t, *ptf_em_internal_insert_input_t;
+BUILD_BUG_ON(sizeof(tf_em_internal_insert_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for EM internal rule insert */
+typedef struct tf_em_internal_insert_output {
+	/* EM record pointer index */
+	uint16_t			 rptr_index;
+	/* EM record offset 0~3 */
+	uint8_t			  rptr_entry;
+} tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
+BUILD_BUG_ON(sizeof(tf_em_internal_insert_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for EM INTERNAL rule delete */
+typedef struct tf_em_internal_delete_input {
+	/* Session Id */
+	uint32_t			 tf_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the delete applies to RX */
+#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the delete applies to TX */
+#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* EM internal flow handle */
+	uint64_t			 flow_handle;
+	/* EM Key value */
+	uint64_t			 em_key[8];
+	/* number of bits in em_key */
+	uint16_t			 em_key_bitlen;
+} tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
+BUILD_BUG_ON(sizeof(tf_em_internal_delete_input_t) <= TF_MAX_REQ_SIZE);
+
+#endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
new file mode 100644
index 0000000..6bafae5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdio.h>
+
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "bnxt.h"
+
+int
+tf_open_session(struct tf                    *tfp,
+		struct tf_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session *session;
+	struct tfp_calloc_parms alloc_parms;
+	unsigned int domain, bus, slot, device;
+	uint8_t fw_session_id;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	/* Filter out any non-supported device types on the Core
+	 * side. It is assumed that the Firmware supports the device
+	 * if the firmware session open succeeds.
+	 */
+	if (parms->device_type != TF_DEVICE_TYPE_WH)
+		return -ENOTSUP;
+
+	/* Build the beginning of session_id */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* open FW session and get a new session_id */
+	rc = tf_msg_session_open(tfp,
+				 parms->ctrl_chan_name,
+				 &fw_session_id);
+	if (rc) {
+		/* Log error */
+		if (rc == -EEXIST)
+			PMD_DRV_LOG(ERR,
+				    "Session is already open, rc:%d\n",
+				    rc);
+		else
+			PMD_DRV_LOG(ERR,
+				    "Open message send failed, rc:%d\n",
+				    rc);
+
+		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
+		return rc;
+	}
+
+	/* Allocate session */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = sizeof(struct tf_session_info);
+	alloc_parms.alignment = 0;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%d\n",
+			    rc);
+		goto cleanup;
+	}
+
+	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
+
+	/* Allocate core data for the session */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = sizeof(struct tf_session);
+	alloc_parms.alignment = 0;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%d\n",
+			    rc);
+		goto cleanup;
+	}
+
+	tfp->session->core_data = alloc_parms.mem_va;
+
+	session = (struct tf_session *)tfp->session->core_data;
+	tfp_memcpy(session->ctrl_chan_name,
+		   parms->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	/* Initialize Session */
+	session->device_type = parms->device_type;
+
+	/* Construct the Session ID */
+	session->session_id.internal.domain = domain;
+	session->session_id.internal.bus = bus;
+	session->session_id.internal.device = device;
+	session->session_id.internal.fw_session_id = fw_session_id;
+
+	rc = tf_msg_session_qcfg(tfp);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%d\n",
+			    rc);
+		goto cleanup_close;
+	}
+
+	session->ref_count++;
+
+	/* Return session ID */
+	parms->session_id = session->session_id;
+
+	PMD_DRV_LOG(INFO,
+		    "Session created, session_id:%d\n",
+		    parms->session_id.id);
+
+	PMD_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return 0;
+
+ cleanup:
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
+	return rc;
+
+ cleanup_close:
+	return -EINVAL;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
new file mode 100644
index 0000000..69433ac
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -0,0 +1,347 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_CORE_H_
+#define _TF_CORE_H_
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdio.h>
+
+#include "tf_project.h"
+
+/**
+ * @file
+ *
+ * Truflow Core API Header File
+ */
+
+/********** BEGIN Truflow Core DEFINITIONS **********/
+
+/**
+ * direction
+ */
+enum tf_dir {
+	TF_DIR_RX,  /**< Receive */
+	TF_DIR_TX,  /**< Transmit */
+	TF_DIR_MAX
+};
+
+/********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
+
+/**
+ * @page general General
+ *
+ * @ref tf_open_session
+ *
+ * @ref tf_attach_session
+ *
+ * @ref tf_close_session
+ */
+
+
+/** Session Version defines
+ *
+ * The version controls the format of the tf_session and
+ * tf_session_info structure. This is to assure upgrade between
+ * versions can be supported.
+ */
+#define TF_SESSION_VER_MAJOR  1   /**< Major Version */
+#define TF_SESSION_VER_MINOR  0   /**< Minor Version */
+#define TF_SESSION_VER_UPDATE 0   /**< Update Version */
+
+/** Session Name
+ *
+ * Name of the TruFlow control channel interface.  Expects
+ * format to be RTE Name specific, i.e. rte_eth_dev_get_name_by_port()
+ */
+#define TF_SESSION_NAME_MAX       64
+
+#define TF_FW_SESSION_ID_INVALID  0xFF  /**< Invalid FW Session ID define */
+
+/** Session Identifier
+ *
+ * Unique session identifier which includes PCIe bus info to
+ * distinguish the PF and session info to identify the associated
+ * TruFlow session. Session ID is constructed from the passed in
+ * ctrl_chan_name in tf_open_session() together with an allocated
+ * fw_session_id. Done by TruFlow on tf_open_session().
+ */
+union tf_session_id {
+	uint32_t id;
+	struct {
+		uint8_t domain;
+		uint8_t bus;
+		uint8_t device;
+		uint8_t fw_session_id;
+	} internal;
+};
+
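+/*
+ * Example (illustrative only): opening a session on PCI device
+ * "0000:3b:00.0" that was handed fw_session_id 1 would yield
+ *
+ *   union tf_session_id sid;
+ *
+ *   sid.internal.domain = 0x00;
+ *   sid.internal.bus = 0x3b;
+ *   sid.internal.device = 0x00;
+ *   sid.internal.fw_session_id = 0x01;
+ *
+ * and sid.id then reads 0x01003b00 on a little-endian host (the
+ * 32-bit view is byte-order dependent).
+ */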
+/** Session Version
+ *
+ * The version controls the format of the tf_session and
+ * tf_session_info structure. This is to assure upgrade between
+ * versions can be supported.
+ *
+ * Please see the TF_VER_MAJOR/MINOR and UPDATE defines.
+ */
+struct tf_session_version {
+	uint8_t major;
+	uint8_t minor;
+	uint8_t update;
+};
+
+/** Session supported device types
+ *
+ */
+enum tf_device_type {
+	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
+	TF_DEVICE_TYPE_BRD2,   /**< TBD       */
+	TF_DEVICE_TYPE_BRD3,   /**< TBD       */
+	TF_DEVICE_TYPE_BRD4,   /**< TBD       */
+	TF_DEVICE_TYPE_MAX     /**< Maximum   */
+};
+
+/** TruFlow Session Information
+ *
+ * Structure defining a TruFlow Session, also known as a Management
+ * session. This structure is initialized at time of
+ * tf_open_session(). It is passed to all of the TruFlow APIs as way
+ * to prescribe and isolate resources between different TruFlow ULP
+ * Applications.
+ */
+struct tf_session_info {
+	/**
+	 * TruFlow Version. Used to control the structure layout when
+	 * sharing sessions. No guarantee that a secondary process
+	 * would come from the same version of an executable.
+	 * TruFlow initializes this variable on tf_open_session().
+	 *
+	 * Owner:  TruFlow
+	 * Access: TruFlow
+	 */
+	struct tf_session_version ver;
+	/**
+	 * will be STAILQ_ENTRY(tf_session_info) next
+	 *
+	 * Owner:  ULP
+	 * Access: ULP
+	 */
+	void                 *next;
+	/**
+	 * Session ID is a unique identifier for the session. TruFlow
+	 * initializes this variable during tf_open_session()
+	 * processing.
+	 *
+	 * Owner:  TruFlow
+	 * Access: Truflow & ULP
+	 */
+	union tf_session_id   session_id;
+	/**
+	 * Protects access to core_data. Lock is initialized and owned
+	 * by ULP. TruFlow can access the core_data without checking
+	 * the lock.
+	 *
+	 * Owner:  ULP
+	 * Access: ULP
+	 */
+	uint8_t               spin_lock;
+	/**
+	 * The core_data holds the TruFlow tf_session data
+	 * structure. This memory is allocated and owned by TruFlow on
+	 * tf_open_session().
+	 *
+	 * TruFlow uses this memory for session management control
+	 * until the session is closed by ULP. Access control is done
+	 * by the spin_lock which ULP controls ahead of TruFlow API
+	 * calls.
+	 *
+	 * Please see tf_open_session_parms for specification details
+	 * on this variable.
+	 *
+	 * Owner:  TruFlow
+	 * Access: TruFlow
+	 */
+	void                 *core_data;
+	/**
+	 * The core_data_sz_bytes specifies the size of core_data in
+	 * bytes.
+	 *
+	 * The size is set by TruFlow on tf_open_session().
+	 *
+	 * Please see tf_open_session_parms for specification details
+	 * on this variable.
+	 *
+	 * Owner:  TruFlow
+	 * Access: TruFlow
+	 */
+	uint32_t              core_data_sz_bytes;
+};
+
+/** TruFlow handle
+ *
+ * Contains a pointer to the session info. Allocated by ULP and passed
+ * to TruFlow using tf_open_session(). TruFlow will populate the
+ * session info at that time. Additional 'opens' can be done using
+ * same session_info by using tf_attach_session().
+ *
+ * It is expected that ULP allocates this memory as shared memory.
+ *
+ * NOTE: This struct must be within the BNXT PMD struct bnxt
+ *       (bp). This allows use of container_of() to get access to the PMD.
+ */
+struct tf {
+	struct tf_session_info *session;
+};
+
+
+/**
+ * tf_open_session parameters definition.
+ */
+struct tf_open_session_parms {
+	/** [in] ctrl_chan_name
+	 *
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * The ctrl_chan_name can be looked up by using
+	 * rte_eth_dev_get_name_by_port() within the ULP.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+	/** [in] shadow_copy
+	 *
+	 * Boolean controlling the use and availability of shadow
+	 * copy. Shadow copy will allow the TruFlow to keep track of
+	 * resource content on the firmware side without having to
+	 * query firmware. Additional private session core_data will
+	 * be allocated if this boolean is set to 'true', default
+	 * 'false'.
+	 *
+	 * Size of memory depends on the NVM Resource settings for the
+	 * control channel.
+	 */
+	bool shadow_copy;
+	/** [in/out] session_id
+	 *
+	 * Session_id is unique per session.
+	 *
+	 * Session_id is composed of domain, bus, device and
+	 * fw_session_id. The construction is done by parsing the
+	 * ctrl_chan_name together with allocation of a fw_session_id.
+	 *
+	 * The session_id allows a session to be shared between devices.
+	 */
+	union tf_session_id session_id;
+	/** [in] device type
+	 *
+	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
+	 */
+	enum tf_device_type device_type;
+};
+
+/**
+ * Opens a new TruFlow management session.
+ *
+ * TruFlow will allocate session-specific memory (shared memory) to
+ * hold its session data. This data is private to TruFlow.
+ *
+ * Multiple PFs can share the same session. An association, refcount,
+ * between session and PFs is maintained within TruFlow. Thus, a PF
+ * can attach to an existing session, see tf_attach_session().
+ *
+ * No other TruFlow APIs will succeed unless this API is first called and
+ * succeeds.
+ *
+ * tf_open_session() returns a session id that can be used on attach.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ * [in] parms
+ *   Pointer to open parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_open_session(struct tf *tfp,
+		    struct tf_open_session_parms *parms);
+
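+/*
+ * Minimal usage sketch (illustrative; error handling trimmed). It
+ * assumes 'bp' is the PMD's private struct bnxt, which embeds the
+ * struct tf handle as 'tfp', and that port_id is an ethdev port
+ * backed by this PMD:
+ *
+ *   struct tf_open_session_parms parms = { 0 };
+ *   struct tf *tfp = &bp->tfp;
+ *
+ *   rte_eth_dev_get_name_by_port(port_id, parms.ctrl_chan_name);
+ *   parms.device_type = TF_DEVICE_TYPE_WH;
+ *   parms.shadow_copy = false;
+ *   if (tf_open_session(tfp, &parms) == 0)
+ *           printf("session id 0x%x\n", parms.session_id.id);
+ */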
+struct tf_attach_session_parms {
+	/** [in] ctrl_chan_name
+	 *
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * The ctrl_chan_name can be looked up by using
+	 * rte_eth_dev_get_name_by_port() within the ULP.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/** [in] attach_chan_name
+	 *
+	 * String containing name of attach channel interface to be
+	 * used for this session.
+	 *
+	 * The attach_chan_name must be given to a 2nd process after
+	 * the primary process has been created. This is the
+	 * ctrl_chan_name of the primary process and is used to find
+	 * the shared memory for the session that the attach is going
+	 * to use.
+	 */
+	char attach_chan_name[TF_SESSION_NAME_MAX];
+
+	/** [in] session_id
+	 *
+	 * Session_id is unique per session. For Attach the session_id
+	 * should be the session_id that was returned on the first
+	 * open.
+	 *
+	 * Session_id is composed of domain, bus, device and
+	 * fw_session_id. The construction is done by parsing the
+	 * ctrl_chan_name together with allocation of a fw_session_id
+	 * during tf_open_session().
+	 *
+	 * A reference count will be incremented on attach. A session
+	 * is first fully closed when reference count is zero by
+	 * calling tf_close_session().
+	 */
+	union tf_session_id session_id;
+};
+
+/**
+ * Attaches to an existing session. Used when more than one PF wants
+ * to share a single session. In that case all TruFlow management
+ * traffic will be sent to the TruFlow firmware using the 'PF' that
+ * did the attach, not the session ctrl channel.
+ *
+ * Attach will increment a ref count so as to manage the shared session data.
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] parms, pointer to attach parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_attach_session(struct tf *tfp,
+		      struct tf_attach_session_parms *parms);
+
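+/*
+ * Attach usage sketch (illustrative; tf_attach_session is only
+ * prototyped at this point). A second process reuses the session_id
+ * returned by the primary's tf_open_session(); primary_ctrl_chan_name
+ * and primary_session_id below stand in for values published by the
+ * primary process:
+ *
+ *   struct tf_attach_session_parms aparms = { 0 };
+ *
+ *   rte_eth_dev_get_name_by_port(port_id, aparms.ctrl_chan_name);
+ *   tfp_memcpy(aparms.attach_chan_name, primary_ctrl_chan_name,
+ *              TF_SESSION_NAME_MAX);
+ *   aparms.session_id = primary_session_id;
+ *   rc = tf_attach_session(tfp, &aparms);
+ */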
+/**
+ * Closes an existing session. Cleans up all hardware and firmware
+ * state associated with the TruFlow application session when the last
+ * PF associated with the session closes and the refcount reaches zero.
+ *
+ * Returns success or failure code.
+ */
+int tf_close_session(struct tf *tfp);
+
+#endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
new file mode 100644
index 0000000..2b68681
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+#include <stdbool.h>
+#include <stdlib.h>
+
+#include "bnxt.h"
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tfp.h"
+
+#include "tf_msg_common.h"
+#include "tf_msg.h"
+#include "hsi_struct_def_dpdk.h"
+#include "hwrm_tf.h"
+
+/**
+ * Sends session open request to TF Firmware
+ */
+int
+tf_msg_session_open(struct tf *tfp,
+		    char *ctrl_chan_name,
+		    uint8_t *fw_session_id)
+{
+	int rc;
+	struct hwrm_tf_session_open_input req = { 0 };
+	struct hwrm_tf_session_open_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
+
+	parms.tf_type = HWRM_TF_SESSION_OPEN;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*fw_session_id = resp.fw_session_id;
+
+	return rc;
+}
+
+/**
+ * Sends session query config request to TF Firmware
+ */
+int
+tf_msg_session_qcfg(struct tf *tfp)
+{
+	int rc;
+	struct hwrm_tf_session_qcfg_input  req = { 0 };
+	struct hwrm_tf_session_qcfg_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	parms.tf_type = HWRM_TF_SESSION_QCFG;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
new file mode 100644
index 0000000..20ebf2e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_MSG_H_
+#define _TF_MSG_H_
+
+#include "tf_rm.h"
+
+struct tf;
+
+/**
+ * Sends session open request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
+ * [out] fw_session_id
+ *   Pointer to the fw_session_id that is allocated on firmware side
+ *
+ * Returns:
+ *   0 on success, negative errno otherwise.
+ */
+int tf_msg_session_open(struct tf *tfp,
+			char *ctrl_chan_name,
+			uint8_t *fw_session_id);
+
+/**
+ * Sends session query config request to TF Firmware
+ */
+int tf_msg_session_qcfg(struct tf *tfp);
+
+/**
+ * Sends session HW resource query capability request to TF Firmware
+ */
+int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
+				 enum tf_dir dir,
+				 struct tf_rm_hw_query *hw_query);
+
+#endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg_common.h b/drivers/net/bnxt/tf_core/tf_msg_common.h
new file mode 100644
index 0000000..7a4e825
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_msg_common.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_MSG_COMMON_H_
+#define _TF_MSG_COMMON_H_
+
+/* Communication Mailboxes */
+#define TF_CHIMP_MB 0
+#define TF_KONG_MB  1
+
+/* Helper to fill in the parms structure */
+#define MSG_PREP(parms, mb, type, subtype, req, resp) do {	\
+		parms.mailbox = mb;				\
+		parms.tf_type = type;				\
+		parms.tf_subtype = subtype;			\
+		parms.tf_resp_code = 0;				\
+		parms.req_size = sizeof(req);			\
+		parms.req_data = (uint32_t *)&(req);		\
+		parms.resp_size = sizeof(resp);			\
+		parms.resp_data = (uint32_t *)&(resp);		\
+	} while (0)
+
+#define MSG_PREP_NO_REQ(parms, mb, type, subtype, resp) do {	\
+		parms.mailbox = mb;				\
+		parms.tf_type = type;				\
+		parms.tf_subtype = subtype;			\
+		parms.tf_resp_code = 0;				\
+		parms.req_size  = 0;				\
+		parms.req_data  = NULL;				\
+		parms.resp_size = sizeof(resp);			\
+		parms.resp_data = (uint32_t *)&(resp);		\
+	} while (0)
+
+#define MSG_PREP_NO_RESP(parms, mb, type, subtype, req) do {	\
+		parms.mailbox = mb;				\
+		parms.tf_type = type;				\
+		parms.tf_subtype = subtype;			\
+		parms.tf_resp_code = 0;				\
+		parms.req_size = sizeof(req);			\
+		parms.req_data = (uint32_t *)&(req);		\
+		parms.resp_size = 0;				\
+		parms.resp_data = NULL;				\
+	} while (0)
+
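+/*
+ * Usage sketch (illustrative): a tunneled message wrapper would fill
+ * the parms with MSG_PREP and hand them to the portability layer.
+ * The request/response typedefs and the TYPE/SUBTYPE constants below
+ * are placeholders, not real defines:
+ *
+ *   struct tfp_send_msg_parms parms = { 0 };
+ *   tf_example_req_t req = { 0 };
+ *   tf_example_resp_t resp = { 0 };
+ *
+ *   MSG_PREP(parms,
+ *            TF_KONG_MB,
+ *            TF_EXAMPLE_TYPE,
+ *            TF_EXAMPLE_SUBTYPE,
+ *            req,
+ *            resp);
+ *   rc = tfp_send_msg_tunneled(tfp, &parms);
+ */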
+#endif /* _TF_MSG_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_project.h b/drivers/net/bnxt/tf_core/tf_project.h
new file mode 100644
index 0000000..ab5f113
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_project.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_PROJECT_H_
+#define _TF_PROJECT_H_
+
+/* Wh+ support enabled */
+#ifndef TF_SUPPORT_P4
+#define TF_SUPPORT_P4 1
+#endif
+
+/* Shadow DB Support */
+#ifndef TF_SHADOW
+#define TF_SHADOW 0
+#endif
+
+/* Shared memory for session */
+#ifndef TF_SHARED
+#define TF_SHARED 0
+#endif
+
+#endif /* _TF_PROJECT_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
new file mode 100644
index 0000000..160abac
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_RESOURCES_H_
+#define _TF_RESOURCES_H_
+
+/*
+ * Hardware specific MAX values
+ * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
+ */
+
+/** HW Resource types
+ */
+enum tf_resource_type_hw {
+	/* Common HW resources for all chip variants */
+	TF_RESC_TYPE_HW_L2_CTXT_TCAM,
+	TF_RESC_TYPE_HW_PROF_FUNC,
+	TF_RESC_TYPE_HW_PROF_TCAM,
+	TF_RESC_TYPE_HW_EM_PROF_ID,
+	TF_RESC_TYPE_HW_EM_REC,
+	TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+	TF_RESC_TYPE_HW_WC_TCAM,
+	TF_RESC_TYPE_HW_METER_PROF,
+	TF_RESC_TYPE_HW_METER_INST,
+	TF_RESC_TYPE_HW_MIRROR,
+	TF_RESC_TYPE_HW_UPAR,
+	/* Wh+/Brd2 specific HW resources */
+	TF_RESC_TYPE_HW_SP_TCAM,
+	/* Brd2/Brd4 specific HW resources */
+	TF_RESC_TYPE_HW_L2_FUNC,
+	/* Brd3, Brd4 common HW resources */
+	TF_RESC_TYPE_HW_FKB,
+	/* Brd4 specific HW resources */
+	TF_RESC_TYPE_HW_TBL_SCOPE,
+	TF_RESC_TYPE_HW_EPOCH0,
+	TF_RESC_TYPE_HW_EPOCH1,
+	TF_RESC_TYPE_HW_METADATA,
+	TF_RESC_TYPE_HW_CT_STATE,
+	TF_RESC_TYPE_HW_RANGE_PROF,
+	TF_RESC_TYPE_HW_RANGE_ENTRY,
+	TF_RESC_TYPE_HW_LAG_ENTRY,
+	TF_RESC_TYPE_HW_MAX
+};
+#endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
new file mode 100644
index 0000000..5164d6b
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_RM_H_
+#define TF_RM_H_
+
+#include "tf_resources.h"
+#include "tf_core.h"
+
+struct tf;
+struct tf_session;
+
+/**
+ * Resource query single entry
+ */
+struct tf_rm_query_entry {
+	/** Minimum guaranteed number of elements */
+	uint16_t min;
+	/** Maximum non-guaranteed number of elements */
+	uint16_t max;
+};
+
+/**
+ * Resource query array of HW entities
+ */
+struct tf_rm_hw_query {
+	/** array of HW resource entries */
+	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
+};
+
+#endif /* TF_RM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
new file mode 100644
index 0000000..c30ebbe
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SESSION_H_
+#define _TF_SESSION_H_
+
+#include <stdint.h>
+#include <stdlib.h>
+
+#include "tf_core.h"
+#include "tf_rm.h"
+
+/** Session defines
+ */
+#define TF_SESSIONS_MAX	          1          /**< max # sessions */
+#define TF_SESSION_ID_INVALID     0xFFFFFFFF /**< Invalid Session ID define */
+
+/** Session
+ *
+ * Shared memory containing private TruFlow session information.
+ * Through this structure the session can keep track of resource
+ * allocations and (if so configured) any shadow copy of flow
+ * information.
+ *
+ * Memory is assigned to the TruFlow instance by way of
+ * tf_open_session. Memory is allocated and owned by the ULP.
+ *
+ * Access control to this shared memory is handled by the spin_lock in
+ * tf_session_info.
+ */
+struct tf_session {
+	/** TruFlow Version. Used to control the structure layout
+	 * when sharing sessions. No guarantee that a secondary
+	 * process would come from the same version of an executable.
+	 */
+	struct tf_session_version ver;
+
+	/** Device type, provided by tf_open_session().
+	 */
+	enum tf_device_type device_type;
+
+	/** Session ID, allocated by FW on tf_open_session().
+	 */
+	union tf_session_id session_id;
+
+	/**
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/**
+	 * Boolean controlling the use and availability of shadow
+	 * copy. Shadow copy will allow the TruFlow Core to keep track
+	 * of resource content on the firmware side without having to
+	 * query firmware. Additional private session core_data will
+	 * be allocated if this boolean is set to 'true', default
+	 * 'false'.
+	 *
+	 * Size of memory depends on the NVM Resource settings for the
+	 * control channel.
+	 */
+	bool shadow_copy;
+
+	/**
+	 * Session Reference Count. To keep track of functions per
+	 * session, the ref_count is incremented. There is also a
+	 * parallel TruFlow Firmware ref_count in case the TruFlow
+	 * Core goes away without informing the Firmware.
+	 */
+	uint8_t ref_count;
+
+	/** CRC32 seed table */
+#define TF_LKUP_SEED_MEM_SIZE 512
+	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
+	/** Lookup3 init values */
+	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
+
+};
+#endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
new file mode 100644
index 0000000..fb5c297
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -0,0 +1,163 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_memcpy.h>
+#include <rte_byteorder.h>
+#include <rte_config.h>
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+
+#include "tf_core.h"
+#include "tfp.h"
+#include "bnxt.h"
+#include "bnxt_hwrm.h"
+#include "tf_msg_common.h"
+
+/**
+ * Sends TruFlow msg to the TruFlow Firmware using
+ * a message specific HWRM message type.
+ *
+ * Returns success or failure code.
+ */
+int
+tfp_send_msg_direct(struct tf *tfp,
+		    struct tfp_send_msg_parms *parms)
+{
+	int      rc = 0;
+	uint8_t  use_kong_mb = 1;
+
+	if (parms == NULL)
+		return -EINVAL;
+
+	if (parms->mailbox == TF_CHIMP_MB)
+		use_kong_mb = 0;
+
+	rc = bnxt_hwrm_tf_message_direct(container_of(tfp,
+					       struct bnxt,
+					       tfp),
+					 use_kong_mb,
+					 parms->tf_type,
+					 parms->req_data,
+					 parms->req_size,
+					 parms->resp_data,
+					 parms->resp_size);
+
+	return rc;
+}
+
+/**
+ * Sends preformatted TruFlow msg to the TruFlow Firmware using
+ * the Truflow tunnel HWRM message type.
+ *
+ * Returns success or failure code.
+ */
+int
+tfp_send_msg_tunneled(struct tf *tfp,
+		      struct tfp_send_msg_parms *parms)
+{
+	int      rc = 0;
+	uint8_t  use_kong_mb = 1;
+
+	if (parms == NULL)
+		return -EINVAL;
+
+	if (parms->mailbox == TF_CHIMP_MB)
+		use_kong_mb = 0;
+
+	rc = bnxt_hwrm_tf_message_tunneled(container_of(tfp,
+						  struct bnxt,
+						  tfp),
+					   use_kong_mb,
+					   parms->tf_type,
+					   parms->tf_subtype,
+					   &parms->tf_resp_code,
+					   parms->req_data,
+					   parms->req_size,
+					   parms->resp_data,
+					   parms->resp_size);
+
+	return rc;
+}
+
+/**
+ * Allocates zero'ed memory from the heap.
+ *
+ * Returns success or failure code.
+ */
+int
+tfp_calloc(struct tfp_calloc_parms *parms)
+{
+	if (parms == NULL)
+		return -EINVAL;
+
+	parms->mem_va = rte_zmalloc("tf",
+				    (parms->nitems * parms->size),
+				    parms->alignment);
+	if (parms->mem_va == NULL) {
+		PMD_DRV_LOG(ERR, "Allocate failed mem_va\n");
+		return -ENOMEM;
+	}
+
+	parms->mem_pa = (void *)rte_mem_virt2iova(parms->mem_va);
+	if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
+		PMD_DRV_LOG(ERR, "Allocate failed mem_pa\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/**
+ * Frees the memory space pointed to by the provided pointer. The
+ * pointer must have been returned from the tfp_calloc().
+ */
+void
+tfp_free(void *addr)
+{
+	rte_free(addr);
+}
+
+/**
+ * Copies n bytes from src memory to dest memory. The memory areas
+ * must not overlap.
+ */
+void
+tfp_memcpy(void *dest, void *src, size_t n)
+{
+	rte_memcpy(dest, src, n);
+}
+
+/**
+ * Used to initialize portable spin lock
+ */
+void
+tfp_spinlock_init(struct tfp_spinlock_parms *parms)
+{
+	rte_spinlock_init(&parms->slock);
+}
+
+/**
+ * Used to lock portable spin lock
+ */
+void
+tfp_spinlock_lock(struct tfp_spinlock_parms *parms)
+{
+	rte_spinlock_lock(&parms->slock);
+}
+
+/**
+ * Used to unlock portable spin lock
+ */
+void
+tfp_spinlock_unlock(struct tfp_spinlock_parms *parms)
+{
+	rte_spinlock_unlock(&parms->slock);
+}
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
new file mode 100644
index 0000000..8d5e94e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -0,0 +1,188 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* This header file defines the Portability structures and APIs for
+ * TruFlow.
+ */
+
+#ifndef _TFP_H_
+#define _TFP_H_
+
+#include <rte_spinlock.h>
+
+/** Spinlock
+ */
+struct tfp_spinlock_parms {
+	rte_spinlock_t slock;
+};
+
+/**
+ * @file
+ *
+ * TruFlow Portability API Header File
+ */
+
+/** send message parameter definition
+ */
+struct tfp_send_msg_parms {
+	/**
+	 * [in] mailbox, specifying the Mailbox to send the command on.
+	 */
+	uint32_t  mailbox;
+	/**
+	 * [in] tf_type, specifies the tlv_type.
+	 */
+	uint16_t  tf_type;
+	/**
+	 * [in] tf_subtype, specifies the tlv_subtype.
+	 */
+	uint16_t  tf_subtype;
+	/**
+	 * [out] tf_resp_code, response code from the internal tlv
+	 *       message. Only supported on tunneled messages.
+	 */
+	uint32_t tf_resp_code;
+	/**
+	 * [in] size, number specifying the request size of the data in bytes
+	 */
+	uint32_t req_size;
+	/**
+	 * [in] data, pointer to the data to be sent within the HWRM command
+	 */
+	uint32_t *req_data;
+	/**
+	 * [in] size, number specifying the response buffer size in bytes
+	 */
+	uint32_t resp_size;
+	/**
+	 * [out] data, pointer to the buffer that receives the response data
+	 */
+	uint32_t *resp_data;
+};
+
+/** calloc parameter definition
+ */
+struct tfp_calloc_parms {
+	/**
+	 * [in] nitems, number specifying number of items to allocate.
+	 */
+	size_t nitems;
+	/**
+	 * [in] size, number specifying the size of each memory item
+	 *      requested. Size is in bytes.
+	 */
+	size_t size;
+	/**
+	 * [in] alignment, number indicates byte alignment required. 0
+	 *      - don't care, 16 - 16 byte alignment, 4K - 4K alignment etc
+	 */
+	size_t alignment;
+	/**
+	 * [out] mem_va, pointer to the allocated memory.
+	 */
+	void *mem_va;
+	/**
+	 * [out] mem_pa, physical address of the allocated memory.
+	 */
+	void *mem_pa;
+};
+
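+/*
+ * Usage sketch (illustrative): allocate one zero'ed, 4K-aligned
+ * object and retrieve both its virtual and IOVA addresses. use_va()
+ * and use_pa() are placeholders for caller code:
+ *
+ *   struct tfp_calloc_parms cparms;
+ *
+ *   cparms.nitems = 1;
+ *   cparms.size = 4096;
+ *   cparms.alignment = 4096;
+ *   if (tfp_calloc(&cparms) == 0) {
+ *           use_va(cparms.mem_va);
+ *           use_pa(cparms.mem_pa);
+ *           tfp_free(cparms.mem_va);
+ *   }
+ */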
+/**
+ * @page Portability
+ *
+ * @ref tfp_send_msg_direct
+ * @ref tfp_send_msg_tunneled
+ *
+ * @ref tfp_calloc
+ * @ref tfp_free
+ * @ref tfp_memcpy
+ *
+ * @ref tfp_spinlock_init
+ * @ref tfp_spinlock_lock
+ * @ref tfp_spinlock_unlock
+ *
+ * @ref tfp_cpu_to_le_16
+ * @ref tfp_le_to_cpu_16
+ * @ref tfp_cpu_to_le_32
+ * @ref tfp_le_to_cpu_32
+ * @ref tfp_cpu_to_le_64
+ * @ref tfp_le_to_cpu_64
+ * @ref tfp_cpu_to_be_16
+ * @ref tfp_be_to_cpu_16
+ * @ref tfp_cpu_to_be_32
+ * @ref tfp_be_to_cpu_32
+ * @ref tfp_cpu_to_be_64
+ * @ref tfp_be_to_cpu_64
+ */
+
+#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
+#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
+#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
+#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
+#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
+#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
+#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
+#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
+#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
+#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
+#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
+#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
+#define tfp_bswap_16(val) rte_bswap16(val)
+#define tfp_bswap_32(val) rte_bswap32(val)
+#define tfp_bswap_64(val) rte_bswap64(val)
+
+/**
+ * Provides communication capability from the TruFlow API layer to
+ * the TruFlow firmware. The portability layer internally provides
+ * the transport to the firmware.
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] parms, parameter structure
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int tfp_send_msg_direct(struct tf *tfp,
+			struct tfp_send_msg_parms *parms);
+
+/**
+ * Provides communication capability from the TruFlow API layer to
+ * the TruFlow firmware. The portability layer internally provides
+ * the transport to the firmware.
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] parms, parameter structure
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int tfp_send_msg_tunneled(struct tf                 *tfp,
+			  struct tfp_send_msg_parms *parms);
+
+/**
+ * Allocates zero'ed memory from the heap.
+ *
+ * NOTE: Also performs virt2phy address conversion by default and thus
+ * can be expensive to invoke.
+ *
+ * [in] parms, parameter structure
+ *
+ * Returns:
+ *   0              - Success
+ *   -ENOMEM        - No memory available
+ *   -EINVAL        - Parameter error
+ */
+int tfp_calloc(struct tfp_calloc_parms *parms);
+
+void tfp_free(void *addr);
+void tfp_memcpy(void *dest, void *src, size_t n);
+void tfp_spinlock_init(struct tfp_spinlock_parms *slock);
+void tfp_spinlock_lock(struct tfp_spinlock_parms *slock);
+void tfp_spinlock_unlock(struct tfp_spinlock_parms *slock);
+#endif /* _TFP_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 05/33] net/bnxt: add initial tf core session close support
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (3 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 04/33] net/bnxt: add initial tf core session open Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 06/33] net/bnxt: add tf core session sram functions Venkat Duvvuru
                   ` (28 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TruFlow session and resource support functions
- Add TruFlow session close API and related message support functions
  for both session and hw resources

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile               |   1 +
 drivers/net/bnxt/tf_core/bitalloc.c     | 364 +++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/bitalloc.h     | 119 ++++++++++
 drivers/net/bnxt/tf_core/tf_core.c      |  86 +++++++
 drivers/net/bnxt/tf_core/tf_msg.c       | 401 ++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h       |  42 ++++
 drivers/net/bnxt/tf_core/tf_resources.h |  24 +-
 drivers/net/bnxt/tf_core/tf_rm.h        | 113 +++++++++
 drivers/net/bnxt/tf_core/tf_session.h   |   1 +
 9 files changed, 1146 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.c
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 0686988..1b42c1f 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -48,6 +48,7 @@ CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core
 endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_core.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tfp.c
 
diff --git a/drivers/net/bnxt/tf_core/bitalloc.c b/drivers/net/bnxt/tf_core/bitalloc.c
new file mode 100644
index 0000000..fb4df9a
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/bitalloc.c
@@ -0,0 +1,364 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bitalloc.h"
+
+#define BITALLOC_MAX_LEVELS 6
+
+/* Finds the first bit set plus 1, equivalent to gcc __builtin_ffs */
+static int
+ba_ffs(bitalloc_word_t v)
+{
+	int c; /* c will be the number of zero bits on the right plus 1 */
+
+	v &= -v;
+	c = v ? 32 : 0;
+
+	if (v & 0x0000FFFF)
+		c -= 16;
+	if (v & 0x00FF00FF)
+		c -= 8;
+	if (v & 0x0F0F0F0F)
+		c -= 4;
+	if (v & 0x33333333)
+		c -= 2;
+	if (v & 0x55555555)
+		c -= 1;
+
+	return c;
+}
+
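+/*
+ * Worked example of the folding above: for v = 0x18 the isolation
+ * step (v &= -v) leaves 0x08 and c starts at 32. The 0x0000FFFF,
+ * 0x00FF00FF and 0x0F0F0F0F masks each match, subtracting 16, 8 and
+ * 4; 0x33333333 and 0x55555555 do not. The result is 4, i.e. bit
+ * index 3 plus one, matching __builtin_ffs(0x18).
+ */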
+int
+ba_init(struct bitalloc *pool, int size)
+{
+	bitalloc_word_t *mem = (bitalloc_word_t *)pool;
+	int       i;
+
+	/* Initialize */
+	pool->size = 0;
+
+	if (size < 1 || size > BITALLOC_MAX_SIZE)
+		return -1;
+
+	/* Zero structure */
+	for (i = 0;
+	     i < (int)(BITALLOC_SIZEOF(size) / sizeof(bitalloc_word_t));
+	     i++)
+		mem[i] = 0;
+
+	/* Initialize */
+	pool->size = size;
+
+	/* Embed number of words of next level, after each level */
+	int words[BITALLOC_MAX_LEVELS];
+	int lev = 0;
+	int offset = 0;
+
+	words[0] = (size + 31) / 32;
+	while (words[lev] > 1) {
+		lev++;
+		words[lev] = (words[lev - 1] + 31) / 32;
+	}
+
+	while (lev) {
+		offset += words[lev];
+		pool->storage[offset++] = words[--lev];
+	}
+
+	/* Free the entire pool */
+	for (i = 0; i < size; i++)
+		ba_free(pool, i);
+
+	return 0;
+}
+
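+/*
+ * Usage sketch (illustrative): a pool sized for 1024 ids, using the
+ * BITALLOC_SIZEOF() helper from bitalloc.h to size the backing
+ * storage. Heap allocation is only for illustration; callers may
+ * embed the pool instead:
+ *
+ *   struct bitalloc *pool = malloc(BITALLOC_SIZEOF(1024));
+ *
+ *   if (pool != NULL && ba_init(pool, 1024) == 0) {
+ *           int id = ba_alloc(pool);   /- lowest free index, -1 if none
+ *           if (id >= 0)
+ *                   ba_free(pool, id); /- return it to the pool
+ *   }
+ *   free(pool);
+ */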
+static int
+ba_alloc_helper(struct bitalloc *pool,
+		int              offset,
+		int              words,
+		unsigned int     size,
+		int              index,
+		int             *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc = ba_ffs(storage[index]);
+	int       r;
+
+	if (loc == 0)
+		return -1;
+
+	loc--;
+
+	if (pool->size > size) {
+		r = ba_alloc_helper(pool,
+				    offset + words + 1,
+				    storage[words],
+				    size * 32,
+				    index * 32 + loc,
+				    clear);
+	} else {
+		r = index * 32 + loc;
+		*clear = 1;
+		pool->free_count--;
+	}
+
+	if (*clear) {
+		storage[index] &= ~(1 << loc);
+		*clear = (storage[index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc(struct bitalloc *pool)
+{
+	int clear = 0;
+
+	return ba_alloc_helper(pool, 0, 1, 32, 0, &clear);
+}
+
+static int
+ba_alloc_index_helper(struct bitalloc *pool,
+		      int              offset,
+		      int              words,
+		      unsigned int     size,
+		      int             *index,
+		      int             *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc;
+	int       r;
+
+	if (pool->size > size)
+		r = ba_alloc_index_helper(pool,
+					  offset + words + 1,
+					  storage[words],
+					  size * 32,
+					  index,
+					  clear);
+	else
+		r = 1; /* Check if already allocated */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (r == 1) {
+		r = (storage[*index] & (1 << loc)) ? 0 : -1;
+		if (r == 0) {
+			*clear = 1;
+			pool->free_count--;
+		}
+	}
+
+	if (*clear) {
+		storage[*index] &= ~(1 << loc);
+		*clear = (storage[*index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc_index(struct bitalloc *pool, int index)
+{
+	int clear = 0;
+	int index_copy = index;
+
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	if (ba_alloc_index_helper(pool, 0, 1, 32, &index_copy, &clear) >= 0)
+		return index;
+	else
+		return -1;
+}
+
+static int
+ba_inuse_helper(struct bitalloc *pool,
+		int              offset,
+		int              words,
+		unsigned int     size,
+		int             *index)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc;
+	int       r;
+
+	if (pool->size > size)
+		r = ba_inuse_helper(pool,
+				    offset + words + 1,
+				    storage[words],
+				    size * 32,
+				    index);
+	else
+		r = 1; /* Check if in use */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (r == 1)
+		r = (storage[*index] & (1 << loc)) ? -1 : 0;
+
+	return r;
+}
+
+int
+ba_inuse(struct bitalloc *pool, int index)
+{
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	return ba_inuse_helper(pool, 0, 1, 32, &index) == 0;
+}
+
+static int
+ba_free_helper(struct bitalloc *pool,
+	       int              offset,
+	       int              words,
+	       unsigned int     size,
+	       int             *index)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc;
+	int       r;
+
+	if (pool->size > size)
+		r = ba_free_helper(pool,
+				   offset + words + 1,
+				   storage[words],
+				   size * 32,
+				   index);
+	else
+		r = 1; /* Check if already free */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (r == 1) {
+		r = (storage[*index] & (1 << loc)) ? -1 : 0;
+		if (r == 0)
+			pool->free_count++;
+	}
+
+	if (r == 0)
+		storage[*index] |= (1 << loc);
+
+	return r;
+}
+
+int
+ba_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	return ba_free_helper(pool, 0, 1, 32, &index);
+}
+
+int
+ba_inuse_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	return ba_free_helper(pool, 0, 1, 32, &index) + 1;
+}
+
+int
+ba_free_count(struct bitalloc *pool)
+{
+	return (int)pool->free_count;
+}
+
+int
+ba_inuse_count(struct bitalloc *pool)
+{
+	return (int)(pool->size) - (int)(pool->free_count);
+}
+
+static int
+ba_find_next_helper(struct bitalloc *pool,
+		    int              offset,
+		    int              words,
+		    unsigned int     size,
+		    int             *index,
+		    int              free)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc, r, bottom = 0;
+
+	if (pool->size > size)
+		r = ba_find_next_helper(pool,
+					offset + words + 1,
+					storage[words],
+					size * 32,
+					index,
+					free);
+	else
+		bottom = 1; /* Bottom of tree */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (bottom) {
+		int bit_index = *index * 32;
+
+		loc = ba_ffs(~storage[*index] & ((bitalloc_word_t)-1 << loc));
+		if (loc > 0) {
+			loc--;
+			r = (bit_index + loc);
+			if (r >= (int)pool->size)
+				r = -1;
+		} else {
+			/* Loop over array at bottom of tree */
+			r = -1;
+			bit_index += 32;
+			*index = *index + 1;
+			while ((int)pool->size > bit_index) {
+				loc = ba_ffs(~storage[*index]);
+
+				if (loc > 0) {
+					loc--;
+					r = (bit_index + loc);
+					if (r >= (int)pool->size)
+						r = -1;
+					break;
+				}
+				bit_index += 32;
+				*index = *index + 1;
+			}
+		}
+	}
+
+	if (r >= 0 && (free)) {
+		if (bottom)
+			pool->free_count++;
+		storage[*index] |= (1 << loc);
+	}
+
+	return r;
+}
+
+int
+ba_find_next_inuse(struct bitalloc *pool, int index)
+{
+	if (index < 0 ||
+	    index >= (int)pool->size ||
+	    pool->free_count == pool->size)
+		return -1;
+
+	return ba_find_next_helper(pool, 0, 1, 32, &index, 0);
+}
+
+int
+ba_find_next_inuse_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 ||
+	    index >= (int)pool->size ||
+	    pool->free_count == pool->size)
+		return -1;
+
+	return ba_find_next_helper(pool, 0, 1, 32, &index, 1);
+}
diff --git a/drivers/net/bnxt/tf_core/bitalloc.h b/drivers/net/bnxt/tf_core/bitalloc.h
new file mode 100644
index 0000000..563c853
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/bitalloc.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BITALLOC_H_
+#define _BITALLOC_H_
+
+#include <stdint.h>
+
+/* Bitalloc works on uint32_t as its word size */
+typedef uint32_t bitalloc_word_t;
+
+struct bitalloc {
+	bitalloc_word_t size;
+	bitalloc_word_t free_count;
+	bitalloc_word_t storage[1];
+};
+
+#define BA_L0(s) (((s) + 31) / 32)
+#define BA_L1(s) ((BA_L0(s) + 31) / 32)
+#define BA_L2(s) ((BA_L1(s) + 31) / 32)
+#define BA_L3(s) ((BA_L2(s) + 31) / 32)
+#define BA_L4(s) ((BA_L3(s) + 31) / 32)
+
+#define BITALLOC_SIZEOF(size)                                    \
+	(sizeof(struct bitalloc) *				 \
+	 (((sizeof(struct bitalloc) +				 \
+	    sizeof(struct bitalloc) - 1 +			 \
+	    (sizeof(bitalloc_word_t) *				 \
+	     ((BA_L0(size) - 1) +				 \
+	      ((BA_L0(size) == 1) ? 0 : (BA_L1(size) + 1)) +	 \
+	      ((BA_L1(size) == 1) ? 0 : (BA_L2(size) + 1)) +	 \
+	      ((BA_L2(size) == 1) ? 0 : (BA_L3(size) + 1)) +	 \
+	      ((BA_L3(size) == 1) ? 0 : (BA_L4(size) + 1)))))) / \
+	  sizeof(struct bitalloc)))
+
+#define BITALLOC_MAX_SIZE (32 * 32 * 32 * 32 * 32 * 32)
+
+/* The instantiation of a bitalloc looks a bit odd. Since a
+ * bit allocator has variable storage, we need a way to get a
+ * pointer to a bitalloc structure that points to the correct
+ * amount of storage. We do this by creating an array of
+ * bitalloc where the first element in the array is the
+ * actual bitalloc base structure, and the remaining elements
+ * in the array provide the storage for it. This approach allows
+ * instances to be individual variables or members of larger
+ * structures.
+ */
+#define BITALLOC_INST(name, size)                      \
+	struct bitalloc name[(BITALLOC_SIZEOF(size) /  \
+			      sizeof(struct bitalloc))]
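+
+/* A minimal usage sketch (the pool name and size are illustrative):
+ *
+ *   BITALLOC_INST(feat_pool, 1024);
+ *
+ *   ba_init(feat_pool, 1024);
+ *   id = ba_alloc(feat_pool);   (returns the first free index, or -1)
+ *   ba_free(feat_pool, id);
+ */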
+
+/* Symbolic return codes */
+#define BA_SUCCESS           0
+#define BA_FAIL             -1
+#define BA_ENTRY_FREE        0
+#define BA_ENTRY_IN_USE      1
+#define BA_NO_ENTRY_FOUND   -1
+
+/**
+ * Initializes the bit allocator
+ *
+ * Returns 0 on success, -1 on failure.  Size is arbitrary up to
+ * BITALLOC_MAX_SIZE
+ */
+int ba_init(struct bitalloc *pool, int size);
+
+/**
+ * Returns -1 on failure, or index of allocated entry
+ */
+int ba_alloc(struct bitalloc *pool);
+int ba_alloc_index(struct bitalloc *pool, int index);
+
+/**
+ * Query a particular index in a pool to check if it is in use.
+ *
+ * Returns -1 on invalid index, 1 if the index is allocated, 0 if it
+ * is free
+ */
+int ba_inuse(struct bitalloc *pool, int index);
+
+/**
+ * Variant of ba_inuse that frees the index if it is allocated, same
+ * return codes as ba_inuse
+ */
+int ba_inuse_free(struct bitalloc *pool, int index);
+
+/**
+ * Find next index that is in use, start checking at index 'idx'
+ *
+ * Returns next index that is in use on success, or
+ * -1 if no in use index is found
+ */
+int ba_find_next_inuse(struct bitalloc *pool, int idx);
+
+/**
+ * Variant of ba_find_next_inuse that also frees the next in use index,
+ * same return codes as ba_find_next_inuse
+ */
+int ba_find_next_inuse_free(struct bitalloc *pool, int idx);
+
+/**
+ * Returns 0 on success, -1 on failure. Freeing an index that is
+ * already free has no negative side effects, but returns -1.
+ */
+int ba_free(struct bitalloc *pool, int index);
+
+/**
+ * Returns the pool's free count
+ */
+int ba_free_count(struct bitalloc *pool);
+
+/**
+ * Returns the pool's in use count
+ */
+int ba_inuse_count(struct bitalloc *pool);
+
+#endif /* _BITALLOC_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6bafae5..3c5d55d 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -7,10 +7,18 @@
 
 #include "tf_core.h"
 #include "tf_session.h"
+#include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
+#include "bitalloc.h"
 #include "bnxt.h"
 
+static inline uint32_t SWAP_WORDS32(uint32_t val32)
+{
+	return (((val32 & 0x0000ffff) << 16) |
+		((val32 & 0xffff0000) >> 16));
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -141,5 +149,83 @@ tf_open_session(struct tf                    *tfp,
 	return rc;
 
  cleanup_close:
+	tf_close_session(tfp);
 	return -EINVAL;
 }
+
+int
+tf_attach_session(struct tf *tfp __rte_unused,
+		  struct tf_attach_session_parms *parms __rte_unused)
+{
+#if (TF_SHARED == 1)
+	int rc;
+
+	if (tfp == NULL)
+		return -EINVAL;
+
+	/* - Open the shared memory for the attach_chan_name
+	 * - Point to the shared session for this Device instance
+	 * - Check that session is valid
+	 * - Attach to the firmware so it can record there is more
+	 *   than one client of the session.
+	 */
+
+	if (tfp->session) {
+		if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
+			rc = tf_msg_session_attach(tfp,
+						   parms->ctrl_chan_name,
+						   parms->session_id);
+		}
+	}
+#endif /* TF_SHARED */
+	return -1;
+}
+
+int
+tf_close_session(struct tf *tfp)
+{
+	int rc;
+	int rc_close = 0;
+	struct tf_session *tfs;
+	union tf_session_id session_id;
+
+	if (tfp == NULL || tfp->session == NULL)
+		return -EINVAL;
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
+		rc = tf_msg_session_close(tfp);
+		if (rc) {
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "Message send failed, rc:%d\n",
+				    rc);
+		}
+
+		/* Update the ref_count */
+		tfs->ref_count--;
+	}
+
+	session_id = tfs->session_id;
+
+	/* Final cleanup as we're last user of the session */
+	if (tfs->ref_count == 0) {
+		tfp_free(tfp->session->core_data);
+		tfp_free(tfp->session);
+		tfp->session = NULL;
+	}
+
+	PMD_DRV_LOG(INFO,
+		    "Session closed, session_id:%d\n",
+		    session_id.id);
+
+	PMD_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    session_id.internal.domain,
+		    session_id.internal.bus,
+		    session_id.internal.device,
+		    session_id.internal.fw_session_id);
+
+	return rc_close;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 2b68681..e05aea7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,6 +18,82 @@
 #include "hwrm_tf.h"
 
 /**
+ * Endian converts min and max values from the HW response to the query
+ */
+#define TF_HW_RESP_TO_QUERY(query, index, response, element) do {            \
+	(query)->hw_query[index].min =                                       \
+		tfp_le_to_cpu_16(response. element ## _min);                 \
+	(query)->hw_query[index].max =                                       \
+		tfp_le_to_cpu_16(response. element ## _max);                 \
+} while (0)
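+
+/* For example, TF_HW_RESP_TO_QUERY(query, idx, resp, mirrors) expands
+ * to assignments of resp.mirrors_min and resp.mirrors_max (converted
+ * from little endian) into query->hw_query[idx].min and .max.
+ */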
+
+/**
+ * Endian converts the number of entries from the alloc to the request
+ */
+#define TF_HW_ALLOC_TO_REQ(alloc, index, request, element)                   \
+	(request. num_ ## element = tfp_cpu_to_le_16((alloc)->hw_num[index]))
+
+/**
+ * Endian converts the start and stride value from the free to the request
+ */
+#define TF_HW_FREE_TO_REQ(hw_entry, index, request, element) do {            \
+	request.element ## _start =                                          \
+		tfp_cpu_to_le_16(hw_entry[index].start);                     \
+	request.element ## _stride =                                         \
+		tfp_cpu_to_le_16(hw_entry[index].stride);                    \
+} while (0)
+
+/**
+ * Endian converts the start and stride from the HW response to the
+ * alloc
+ */
+#define TF_HW_RESP_TO_ALLOC(hw_entry, index, response, element) do {         \
+	hw_entry[index].start =                                              \
+		tfp_le_to_cpu_16(response.element ## _start);                \
+	hw_entry[index].stride =                                             \
+		tfp_le_to_cpu_16(response.element ## _stride);               \
+} while (0)
+
+/**
+ * Endian converts min and max values from the SRAM response to the
+ * query
+ */
+#define TF_SRAM_RESP_TO_QUERY(query, index, response, element) do {          \
+	(query)->sram_query[index].min =                                     \
+		tfp_le_to_cpu_16(response.element ## _min);                  \
+	(query)->sram_query[index].max =                                     \
+		tfp_le_to_cpu_16(response.element ## _max);                  \
+} while (0)
+
+/**
+ * Endian converts the number of entries from the action (alloc) to
+ * the request
+ */
+#define TF_SRAM_ALLOC_TO_REQ(action, index, request, element)                \
+	(request. num_ ## element = tfp_cpu_to_le_16((action)->sram_num[index]))
+
+/**
+ * Endian converts the start and stride value from the free to the request
+ */
+#define TF_SRAM_FREE_TO_REQ(sram_entry, index, request, element) do {        \
+	request.element ## _start =                                          \
+		tfp_cpu_to_le_16(sram_entry[index].start);                   \
+	request.element ## _stride =                                         \
+		tfp_cpu_to_le_16(sram_entry[index].stride);                  \
+} while (0)
+
+/**
+ * Endian converts the start and stride from the HW response to the
+ * alloc
+ */
+#define TF_SRAM_RESP_TO_ALLOC(sram_entry, index, response, element) do {     \
+	sram_entry[index].start =                                            \
+		tfp_le_to_cpu_16(response.element ## _start);                \
+	sram_entry[index].stride =                                           \
+		tfp_le_to_cpu_16(response.element ## _stride);               \
+} while (0)
+
+/**
  * Sends session open request to TF Firmware
  */
 int
@@ -51,6 +127,45 @@ tf_msg_session_open(struct tf *tfp,
 }
 
 /**
+ * Sends session attach request to TF Firmware
+ */
+int
+tf_msg_session_attach(struct tf *tfp __rte_unused,
+		      char *ctrl_chan_name __rte_unused,
+		      uint8_t tf_fw_session_id __rte_unused)
+{
+	return -1;
+}
+
+/**
+ * Sends session close request to TF Firmware
+ */
+int
+tf_msg_session_close(struct tf *tfp)
+{
+	int rc;
+	struct hwrm_tf_session_close_input req = { 0 };
+	struct hwrm_tf_session_close_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	parms.tf_type = HWRM_TF_SESSION_CLOSE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
  * Sends session query config request to TF Firmware
  */
 int
@@ -77,3 +192,289 @@ tf_msg_session_qcfg(struct tf *tfp)
 				 &parms);
 	return rc;
 }
+
+/**
+ * Sends session HW resource query capability request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_qcaps(struct tf *tfp,
+			     enum tf_dir dir,
+			     struct tf_rm_hw_query *query)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_qcaps_input req = { 0 };
+	struct tf_session_hw_resc_qcaps_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(query, 0, sizeof(*query));
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_HW_RESC_QCAPS,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
+			    l2_ctx_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_FUNC, resp,
+			    prof_func);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_TCAM, resp,
+			    prof_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
+			    em_prof_id);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_REC, resp,
+			    em_record_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
+			    wc_tcam_prof_id);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM, resp,
+			    wc_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_PROF, resp,
+			    meter_profiles);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_INST,
+			    resp, meter_inst);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_MIRROR, resp,
+			    mirrors);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_UPAR, resp,
+			    upar);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_SP_TCAM, resp,
+			    sp_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_FUNC, resp,
+			    l2_func);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_FKB, resp,
+			    flex_key_templ);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
+			    tbl_scope);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH0, resp,
+			    epoch0_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH1, resp,
+			    epoch1_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METADATA, resp,
+			    metadata);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_CT_STATE, resp,
+			    ct_state);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_PROF, resp,
+			    range_prof);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
+			    range_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
+			    lag_tbl_entries);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_alloc(struct tf *tfp,
+			     enum tf_dir dir,
+			     struct tf_rm_hw_alloc *hw_alloc,
+			     struct tf_rm_entry *hw_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_alloc_input req = { 0 };
+	struct tf_session_hw_resc_alloc_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(hw_entry, 0, sizeof(*hw_entry));
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
+			   l2_ctx_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_FUNC, req,
+			   prof_func_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_TCAM, req,
+			   prof_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_PROF_ID, req,
+			   em_prof_id);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_REC, req,
+			   em_record_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
+			   wc_tcam_prof_id);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM, req,
+			   wc_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_PROF, req,
+			   meter_profiles);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_INST, req,
+			   meter_inst);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_MIRROR, req,
+			   mirrors);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_UPAR, req,
+			   upar);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_SP_TCAM, req,
+			   sp_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_FUNC, req,
+			   l2_func);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_FKB, req,
+			   flex_key_templ);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_TBL_SCOPE, req,
+			   tbl_scope);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH0, req,
+			   epoch0_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH1, req,
+			   epoch1_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METADATA, req,
+			   metadata);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_CT_STATE, req,
+			   ct_state);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_PROF, req,
+			   range_prof);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
+			   range_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_LAG_ENTRY, req,
+			   lag_tbl_entries);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_HW_RESC_ALLOC,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
+			    l2_ctx_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, resp,
+			    prof_func);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, resp,
+			    prof_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
+			    em_prof_id);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_REC, resp,
+			    em_record_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
+			    wc_tcam_prof_id);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, resp,
+			    wc_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_PROF, resp,
+			    meter_profiles);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_INST, resp,
+			    meter_inst);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_MIRROR, resp,
+			    mirrors);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_UPAR, resp,
+			    upar);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, resp,
+			    sp_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, resp,
+			    l2_func);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_FKB, resp,
+			    flex_key_templ);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
+			    tbl_scope);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH0, resp,
+			    epoch0_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH1, resp,
+			    epoch1_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METADATA, resp,
+			    metadata);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_CT_STATE, resp,
+			    ct_state);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, resp,
+			    range_prof);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
+			    range_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
+			    lag_tbl_entries);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session HW resource free request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_free(struct tf *tfp,
+			    enum tf_dir dir,
+			    struct tf_rm_entry *hw_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(hw_entry, 0, sizeof(*hw_entry));
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
+			  l2_ctx_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
+			  prof_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
+			  prof_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
+			  em_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
+			  em_record_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
+			  wc_tcam_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
+			  wc_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
+			  meter_profiles);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
+			  meter_inst);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
+			  mirrors);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
+			  upar);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
+			  sp_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
+			  l2_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
+			  flex_key_templ);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
+			  tbl_scope);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
+			  epoch0_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
+			  epoch1_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
+			  metadata);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
+			  ct_state);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
+			  range_prof);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
+			  range_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
+			  lag_tbl_entries);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SESSION_HW_RESC_FREE,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 20ebf2e..da5ccf3 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -30,6 +30,34 @@ int tf_msg_session_open(struct tf *tfp,
 			uint8_t *fw_session_id);
 
 /**
+ * Sends session attach request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] ctrl_channel_name
+ *   Name of the control channel to attach to
+ *
+ * [in] tf_fw_session_id
+ *   fw_session_id that was assigned to the session at time of session
+ *   open
+ *
+ * Returns:
+ *   0 on success, -1 on failure
+ */
+int tf_msg_session_attach(struct tf *tfp,
+			  char *ctrl_channel_name,
+			  uint8_t tf_fw_session_id);
+
+/**
+ * Sends session close request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns:
+ *   0 on success, -1 on failure
+ */
+int tf_msg_session_close(struct tf *tfp);
+
+/**
  * Sends session query config request to TF Firmware
  */
 int tf_msg_session_qcfg(struct tf *tfp);
@@ -41,4 +69,18 @@ int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
 				 enum tf_dir dir,
 				 struct tf_rm_hw_query *hw_query);
 
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ */
+int tf_msg_session_hw_resc_alloc(struct tf *tfp,
+				 enum tf_dir dir,
+				 struct tf_rm_hw_alloc *hw_alloc,
+				 struct tf_rm_entry *hw_entry);
+
+/**
+ * Sends session HW resource free request to TF Firmware
+ */
+int tf_msg_session_hw_resc_free(struct tf *tfp,
+				enum tf_dir dir,
+				struct tf_rm_entry *hw_entry);
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 160abac..8dbb2f9 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,11 +6,6 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
-/*
- * Hardware specific MAX values
- * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
- */
-
 /** HW Resource types
  */
 enum tf_resource_type_hw {
@@ -43,4 +38,23 @@ enum tf_resource_type_hw {
 	TF_RESC_TYPE_HW_LAG_ENTRY,
 	TF_RESC_TYPE_HW_MAX
 };
+
+/** SRAM Resource types
+ */
+enum tf_resource_type_sram {
+	TF_RESC_TYPE_SRAM_FULL_ACTION,
+	TF_RESC_TYPE_SRAM_MCG,
+	TF_RESC_TYPE_SRAM_ENCAP_8B,
+	TF_RESC_TYPE_SRAM_ENCAP_16B,
+	TF_RESC_TYPE_SRAM_ENCAP_64B,
+	TF_RESC_TYPE_SRAM_SP_SMAC,
+	TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+	TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+	TF_RESC_TYPE_SRAM_COUNTER_64B,
+	TF_RESC_TYPE_SRAM_NAT_SPORT,
+	TF_RESC_TYPE_SRAM_NAT_DPORT,
+	TF_RESC_TYPE_SRAM_NAT_S_IPV4,
+	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
+	TF_RESC_TYPE_SRAM_MAX
+};
 #endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 5164d6b..57ce19b 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -8,10 +8,52 @@
 
 #include "tf_resources.h"
 #include "tf_core.h"
+#include "bitalloc.h"
 
 struct tf;
 struct tf_session;
 
+/* Internal macro to select the appropriate allocation pool based on
+ * the DIRECTION parm; it also performs error checking of the DIRECTION
+ * parm. The SESSION_POOL pointer is set appropriately upon successful
+ * return (the SESSION_POOL is the bit allocation pool used to track
+ * the resources that have been allocated to the session)
+ *
+ * parameters:
+ *   struct tf_session *tfs
+ *   enum tf_dir        direction
+ *   struct bitalloc  **session_pool
+ *   string             pool_name - token used to form the names of the
+ *					 per-direction bit allocation
+ *					 pools. Both directions of a
+ *					 session pool must share the
+ *					 same base name, for example if
+ *					 POOL_NAME is FEAT_POOL the
+ *					 session pools are FEAT_POOL_RX
+ *					 and FEAT_POOL_TX
+ *
+ *  int                  rc - return code
+ *			      0 - Success
+ *			     -1 - invalid DIRECTION parm
+ */
+#define TF_RM_GET_POOLS(tfs, direction, session_pool, pool_name, rc) do { \
+		(rc) = 0;						\
+		if ((direction) == TF_DIR_RX) {				\
+			*(session_pool) = (tfs)->pool_name ## _RX;	\
+		} else if ((direction) == TF_DIR_TX) {			\
+			*(session_pool) = (tfs)->pool_name ## _TX;	\
+		} else {						\
+			rc = -1;					\
+		}							\
+	} while (0)
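+
+/* A minimal usage sketch (the pool member name is illustrative):
+ *
+ *   struct bitalloc *pool;
+ *   int rc;
+ *
+ *   TF_RM_GET_POOLS(tfs, dir, &pool, TF_FEAT_POOL_NAME, rc);
+ *   if (rc == 0)
+ *           index = ba_alloc(pool);
+ */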
+
+#define TF_RM_GET_POOLS_RX(tfs, session_pool, pool_name)	\
+	(*(session_pool) = (tfs)->pool_name ## _RX)
+
+#define TF_RM_GET_POOLS_TX(tfs, session_pool, pool_name)	\
+	(*(session_pool) = (tfs)->pool_name ## _TX)
+
 /**
  * Resource query single entry
  */
@@ -23,6 +65,16 @@ struct tf_rm_query_entry {
 };
 
 /**
+ * Resource single entry
+ */
+struct tf_rm_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
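+
+/* For example, start = 960 and stride = 64 describes the contiguous
+ * block of entries 960 through 1023.
+ */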
+
+/**
  * Resource query array of HW entities
  */
 struct tf_rm_hw_query {
@@ -30,4 +82,65 @@ struct tf_rm_hw_query {
 	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
 };
 
+/**
+ * Resource allocation array of HW entities
+ */
+struct tf_rm_hw_alloc {
+	/** array of HW resource entries */
+	uint16_t hw_num[TF_RESC_TYPE_HW_MAX];
+};
+
+/**
+ * Resource query array of SRAM entities
+ */
+struct tf_rm_sram_query {
+	/** array of SRAM resource entries */
+	struct tf_rm_query_entry sram_query[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Resource allocation array of SRAM entities
+ */
+struct tf_rm_sram_alloc {
+	/** array of SRAM resource entries */
+	uint16_t sram_num[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Initializes the Resource Manager and the associated database
+ * entries for HW and SRAM resources. Must be called before any other
+ * Resource Manager functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ */
+void tf_rm_init(struct tf *tfp);
+
+/**
+ * Allocates and validates both HW and SRAM resources per the NVM
+ * configuration. If any allocation fails all resources for the
+ * session is deallocated.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_allocate_validate(struct tf *tfp);
+
+/**
+ * Closes the Resource Manager and frees all allocated resources per
+ * the associated database.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOTEMPTY) if resources are not cleaned up before close
+ */
+int tf_rm_close(struct tf *tfp);
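+
+/*
+ * Typical call order (a sketch): tf_rm_init() at session open,
+ * tf_rm_allocate_validate() to reserve the HW and SRAM pools, and
+ * tf_rm_close() at session close to release them.
+ */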
 #endif /* TF_RM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index c30ebbe..f845984 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -9,6 +9,7 @@
 #include <stdint.h>
 #include <stdlib.h>
 
+#include "bitalloc.h"
 #include "tf_core.h"
 #include "tf_rm.h"
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 06/33] net/bnxt: add tf core session sram functions
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (4 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 05/33] net/bnxt: add initial tf core session close support Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 07/33] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
                   ` (27 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TruFlow session resource support functionality.
- Add TruFlow session HW flush capability as well as
  SRAM support functions.
- Add resource definitions for session pools.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile               |   1 +
 drivers/net/bnxt/tf_core/rand.c         |  47 ++++
 drivers/net/bnxt/tf_core/rand.h         |  36 +++
 drivers/net/bnxt/tf_core/tf_core.c      |   1 +
 drivers/net/bnxt/tf_core/tf_msg.c       | 344 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h       |  37 +++
 drivers/net/bnxt/tf_core/tf_resources.h | 482 ++++++++++++++++++++++++++++++++
 7 files changed, 948 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_core/rand.c
 create mode 100644 drivers/net/bnxt/tf_core/rand.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 1b42c1f..d4c915a 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -50,6 +50,7 @@ endif
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_msg.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tfp.c
 
 #
diff --git a/drivers/net/bnxt/tf_core/rand.c b/drivers/net/bnxt/tf_core/rand.c
new file mode 100644
index 0000000..32028df
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/rand.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Random Number Functions */
+
+#include <stdio.h>
+#include <stdint.h>
+#include "rand.h"
+
+#define TF_RAND_LFSR_INIT_VALUE 0xACE1u
+
+static uint16_t lfsr = TF_RAND_LFSR_INIT_VALUE;
+static uint32_t bit;
+
+/**
+ * Generates a 16 bit pseudo random number
+ *
+ * Returns:
+ *   uint16_t number
+ */
+uint16_t rand16(void)
+{
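+	/* Fibonacci LFSR with taps at bits 16, 14, 13, and 11,
+	 * i.e. polynomial x^16 + x^14 + x^13 + x^11 + 1
+	 */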
+	bit = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1;
+	return lfsr = (lfsr >> 1) | (bit << 15);
+}
+
+/**
+ * Generates a 32 bit pseudo random number
+ *
+ * Returns:
+ *   uint32_t number
+ */
+uint32_t rand32(void)
+{
+	return (rand16() << 16) | rand16();
+}
+
+/**
+ * Resets the seed used by the pseudo random number generator
+ */
+void rand_init(void)
+{
+	lfsr = TF_RAND_LFSR_INIT_VALUE;
+	bit = 0;
+}
diff --git a/drivers/net/bnxt/tf_core/rand.h b/drivers/net/bnxt/tf_core/rand.h
new file mode 100644
index 0000000..31cd76e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/rand.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Random Number Functions */
+#ifndef __RAND_H__
+#define __RAND_H__
+
+/**
+ * Generates a 16 bit pseudo random number
+ *
+ * Returns:
+ * uint16_t number
+ *
+ */
+uint16_t rand16(void);
+
+/**
+ * Generates a 32 bit pseudo random number
+ *
+ * Returns:
+ * uint32_t number
+ *
+ */
+uint32_t rand32(void);
+
+/**
+ * Resets the seed used by the pseudo random number generator
+ *
+ * Returns:
+ *
+ */
+void rand_init(void);
+
+#endif /* __RAND_H__ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 3c5d55d..d82f746 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -12,6 +12,7 @@
 #include "tfp.h"
 #include "bitalloc.h"
 #include "bnxt.h"
+#include "rand.h"
 
 static inline uint32_t SWAP_WORDS32(uint32_t val32)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index e05aea7..4ce2bc5 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -478,3 +478,347 @@ tf_msg_session_hw_resc_free(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+/**
+ * Sends session HW resource flush request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_flush(struct tf *tfp,
+			     enum tf_dir dir,
+			     struct tf_rm_entry *hw_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
+			  l2_ctx_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
+			  prof_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
+			  prof_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
+			  em_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
+			  em_record_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
+			  wc_tcam_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
+			  wc_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
+			  meter_profiles);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
+			  meter_inst);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
+			  mirrors);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
+			  upar);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
+			  sp_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
+			  l2_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
+			  flex_key_templ);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
+			  tbl_scope);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
+			  epoch0_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
+			  epoch1_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
+			  metadata);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
+			  ct_state);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
+			  range_prof);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
+			  range_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
+			  lag_tbl_entries);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 TF_TYPE_TRUFLOW,
+			 HWRM_TFT_SESSION_HW_RESC_FLUSH,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource query capability request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_qcaps(struct tf *tfp,
+			       enum tf_dir dir,
+			       struct tf_rm_sram_query *query)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_qcaps_input req = { 0 };
+	struct tf_session_sram_resc_qcaps_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_SRAM_RESC_QCAPS,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_FULL_ACTION, resp,
+			      full_action);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_MCG, resp,
+			      mcg);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
+			      encap_8b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
+			      encap_16b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
+			      encap_64b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
+			      sp_smac);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, resp,
+			      sp_smac_ipv4);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, resp,
+			      sp_smac_ipv6);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
+			      counter_64b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
+			      nat_sport);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
+			      nat_dport);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
+			      nat_s_ipv4);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
+			      nat_d_ipv4);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource allocation request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_alloc(struct tf *tfp,
+			       enum tf_dir dir,
+			       struct tf_rm_sram_alloc *sram_alloc,
+			       struct tf_rm_entry *sram_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_alloc_input req = { 0 };
+	struct tf_session_sram_resc_alloc_output resp;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(&resp, 0, sizeof(resp));
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
+			     full_action);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_MCG, req,
+			     mcg);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
+			     encap_8b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
+			     encap_16b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
+			     encap_64b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC, req,
+			     sp_smac);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+			     req, sp_smac_ipv4);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+			     req, sp_smac_ipv6);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_COUNTER_64B,
+			     req, counter_64b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
+			     nat_sport);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
+			     nat_dport);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
+			     nat_s_ipv4);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
+			     nat_d_ipv4);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_SRAM_RESC_ALLOC,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION,
+			      resp, full_action);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_MCG, resp,
+			      mcg);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
+			      encap_8b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
+			      encap_16b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
+			      encap_64b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
+			      sp_smac);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+			      resp, sp_smac_ipv4);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+			      resp, sp_smac_ipv6);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
+			      counter_64b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
+			      nat_sport);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
+			      nat_dport);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
+			      nat_s_ipv4);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
+			      nat_d_ipv4);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource free request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_free(struct tf *tfp,
+			      enum tf_dir dir,
+			      struct tf_rm_entry *sram_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
+			    full_action);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
+			    mcg);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
+			    encap_8b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
+			    encap_16b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
+			    encap_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
+			    sp_smac);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
+			    sp_smac_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
+			    sp_smac_ipv6);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
+			    counter_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
+			    nat_sport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
+			    nat_dport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
+			    nat_s_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
+			    nat_d_ipv4);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SESSION_SRAM_RESC_FREE,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource flush request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_flush(struct tf *tfp,
+			       enum tf_dir dir,
+			       struct tf_rm_entry *sram_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
+			    full_action);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
+			    mcg);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
+			    encap_8b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
+			    encap_16b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
+			    encap_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
+			    sp_smac);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
+			    sp_smac_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
+			    sp_smac_ipv6);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
+			    counter_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
+			    nat_sport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
+			    nat_dport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
+			    nat_s_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
+			    nat_d_ipv4);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 TF_TYPE_TRUFLOW,
+			 HWRM_TFT_SESSION_SRAM_RESC_FLUSH,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index da5ccf3..057de84 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -83,4 +83,41 @@ int tf_msg_session_hw_resc_alloc(struct tf *tfp,
 int tf_msg_session_hw_resc_free(struct tf *tfp,
 				enum tf_dir dir,
 				struct tf_rm_entry *hw_entry);
+
+/**
+ * Sends session HW resource flush request to TF Firmware
+ */
+int tf_msg_session_hw_resc_flush(struct tf *tfp,
+				 enum tf_dir dir,
+				 struct tf_rm_entry *hw_entry);
+
+/**
+ * Sends session SRAM resource query capability request to TF Firmware
+ */
+int tf_msg_session_sram_resc_qcaps(struct tf *tfp,
+				   enum tf_dir dir,
+				   struct tf_rm_sram_query *sram_query);
+
+/**
+ * Sends session SRAM resource allocation request to TF Firmware
+ */
+int tf_msg_session_sram_resc_alloc(struct tf *tfp,
+				   enum tf_dir dir,
+				   struct tf_rm_sram_alloc *sram_alloc,
+				   struct tf_rm_entry *sram_entry);
+
+/**
+ * Sends session SRAM resource free request to TF Firmware
+ */
+int tf_msg_session_sram_resc_free(struct tf *tfp,
+				  enum tf_dir dir,
+				  struct tf_rm_entry *sram_entry);
+
+/**
+ * Sends session SRAM resource flush request to TF Firmware
+ */
+int tf_msg_session_sram_resc_flush(struct tf *tfp,
+				   enum tf_dir dir,
+				   struct tf_rm_entry *sram_entry);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 8dbb2f9..05e131f 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,6 +6,487 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
+/*
+ * Hardware specific MAX values
+ * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
+ */
+
+/* Common HW resources for all chip variants */
+#define TF_NUM_L2_CTXT_TCAM      1024      /* < Number of L2 context TCAM
+					    * entries
+					    */
+#define TF_NUM_PROF_FUNC          128      /* < Number of prof_func IDs */
+#define TF_NUM_PROF_TCAM         1024      /* < Number of entries in profile
+					    * TCAM
+					    */
+#define TF_NUM_EM_PROF_ID          64      /* < Number of software EM Profile
+					    * IDs
+					    */
+#define TF_NUM_WC_PROF_ID         256      /* < Number of WC profile IDs */
+#define TF_NUM_WC_TCAM_ROW        256      /* < Number of slices per row in WC
+					    * TCAM. A slice is a WC TCAM entry.
+					    */
+#define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
+#define TF_NUM_METER             1024      /* < Number of meter instances */
+#define TF_NUM_MIRROR               2      /* < Number of mirror instances */
+#define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
+
+/* Wh+/Brd2 specific HW resources */
+#define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
+					    * entries
+					    */
+
+/* Brd2/Brd4 specific HW resources */
+#define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
+
+
+/* Brd3, Brd4 common HW resources */
+#define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
+					    * templates
+					    */
+
+/* Brd4 specific HW resources */
+#define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
+#define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
+#define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
+#define TF_NUM_METADATA             8      /* < Number of MetaData Profiles */
+#define TF_NUM_CT_STATE            32      /* < Number of Connection Tracking
+					    * States
+					    */
+#define TF_NUM_RANGE_PROF          16      /* < Number of Range Profiles */
+#define TF_NUM_RANGE_ENTRY (64 * 1024)     /* < Number of Range Entries */
+#define TF_NUM_LAG_ENTRY          256      /* < Number of LAG Entries */
+
+/*
+ * Common for the Reserved Resource defines below:
+ *
+ * - HW Resources
+ *   For resources where a priority level plays a role, i.e. l2 ctx
+ *   tcam entries, both a number of resources and a begin/end pair is
+ *   required. The begin/end is used to assure TFLIB gets the correct
+ *   priority setting for that resource.
+ *
+ *   For EM records there is no priority required thus a number of
+ *   resources is sufficient.
+ *
+ *   Example, TCAM:
+ *     64 L2 CTXT TCAM entries would in a max 1024 pool be entry
+ *     0-63 as HW presents 0 as the highest priority entry.
+ *
+ * - SRAM Resources
+ *   Handled as regular resources as there is no priority required.
+ *
+ * Common for these resources is that they are handled per direction,
+ * rx/tx.
+ */
+
+/* HW Resources */
+
+/* L2 CTX */
+#define TF_RSVD_L2_CTXT_TCAM_RX                   64
+#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_RX         0
+#define TF_RSVD_L2_CTXT_TCAM_END_IDX_RX           (TF_RSVD_L2_CTXT_TCAM_RX - 1)
+#define TF_RSVD_L2_CTXT_TCAM_TX                   960
+#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_TX         0
+#define TF_RSVD_L2_CTXT_TCAM_END_IDX_TX           (TF_RSVD_L2_CTXT_TCAM_TX - 1)
+
+/* Profiler */
+#define TF_RSVD_PROF_FUNC_RX                      64
+#define TF_RSVD_PROF_FUNC_BEGIN_IDX_RX            64
+#define TF_RSVD_PROF_FUNC_END_IDX_RX              127
+#define TF_RSVD_PROF_FUNC_TX                      64
+#define TF_RSVD_PROF_FUNC_BEGIN_IDX_TX            64
+#define TF_RSVD_PROF_FUNC_END_IDX_TX              127
+
+#define TF_RSVD_PROF_TCAM_RX                      64
+#define TF_RSVD_PROF_TCAM_BEGIN_IDX_RX            960
+#define TF_RSVD_PROF_TCAM_END_IDX_RX              1023
+#define TF_RSVD_PROF_TCAM_TX                      64
+#define TF_RSVD_PROF_TCAM_BEGIN_IDX_TX            960
+#define TF_RSVD_PROF_TCAM_END_IDX_TX              1023
+
+/* EM Profiles IDs */
+#define TF_RSVD_EM_PROF_ID_RX                     64
+#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_RX           0
+#define TF_RSVD_EM_PROF_ID_END_IDX_RX             63  /* Less on CU+ than SR */
+#define TF_RSVD_EM_PROF_ID_TX                     64
+#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_TX           0
+#define TF_RSVD_EM_PROF_ID_END_IDX_TX             63  /* Less on CU+ than SR */
+
+/* EM Records */
+#define TF_RSVD_EM_REC_RX                         16000
+#define TF_RSVD_EM_REC_BEGIN_IDX_RX               0
+#define TF_RSVD_EM_REC_TX                         16000
+#define TF_RSVD_EM_REC_BEGIN_IDX_TX               0
+
+/* Wildcard */
+#define TF_RSVD_WC_TCAM_PROF_ID_RX                128
+#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_RX      128
+#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_RX        255
+#define TF_RSVD_WC_TCAM_PROF_ID_TX                128
+#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_TX      128
+#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_TX        255
+
+#define TF_RSVD_WC_TCAM_RX                        64
+#define TF_RSVD_WC_TCAM_BEGIN_IDX_RX              0
+#define TF_RSVD_WC_TCAM_END_IDX_RX                63
+#define TF_RSVD_WC_TCAM_TX                        64
+#define TF_RSVD_WC_TCAM_BEGIN_IDX_TX              0
+#define TF_RSVD_WC_TCAM_END_IDX_TX                63
+
+#define TF_RSVD_METER_PROF_RX                     0
+#define TF_RSVD_METER_PROF_BEGIN_IDX_RX           0
+#define TF_RSVD_METER_PROF_END_IDX_RX             0
+#define TF_RSVD_METER_PROF_TX                     0
+#define TF_RSVD_METER_PROF_BEGIN_IDX_TX           0
+#define TF_RSVD_METER_PROF_END_IDX_TX             0
+
+#define TF_RSVD_METER_INST_RX                     0
+#define TF_RSVD_METER_INST_BEGIN_IDX_RX           0
+#define TF_RSVD_METER_INST_END_IDX_RX             0
+#define TF_RSVD_METER_INST_TX                     0
+#define TF_RSVD_METER_INST_BEGIN_IDX_TX           0
+#define TF_RSVD_METER_INST_END_IDX_TX             0
+
+/* Mirror */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_MIRROR_RX                         0
+#define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
+#define TF_RSVD_MIRROR_END_IDX_RX                 0
+#define TF_RSVD_MIRROR_TX                         0
+#define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
+#define TF_RSVD_MIRROR_END_IDX_TX                 0
+
+/* UPAR */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_UPAR_RX                           0
+#define TF_RSVD_UPAR_BEGIN_IDX_RX                 0
+#define TF_RSVD_UPAR_END_IDX_RX                   0
+#define TF_RSVD_UPAR_TX                           0
+#define TF_RSVD_UPAR_BEGIN_IDX_TX                 0
+#define TF_RSVD_UPAR_END_IDX_TX                   0
+
+/* Source Properties */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_SP_TCAM_RX                        0
+#define TF_RSVD_SP_TCAM_BEGIN_IDX_RX              0
+#define TF_RSVD_SP_TCAM_END_IDX_RX                0
+#define TF_RSVD_SP_TCAM_TX                        0
+#define TF_RSVD_SP_TCAM_BEGIN_IDX_TX              0
+#define TF_RSVD_SP_TCAM_END_IDX_TX                0
+
+/* L2 Func */
+#define TF_RSVD_L2_FUNC_RX                        0
+#define TF_RSVD_L2_FUNC_BEGIN_IDX_RX              0
+#define TF_RSVD_L2_FUNC_END_IDX_RX                0
+#define TF_RSVD_L2_FUNC_TX                        0
+#define TF_RSVD_L2_FUNC_BEGIN_IDX_TX              0
+#define TF_RSVD_L2_FUNC_END_IDX_TX                0
+
+/* FKB */
+#define TF_RSVD_FKB_RX                            0
+#define TF_RSVD_FKB_BEGIN_IDX_RX                  0
+#define TF_RSVD_FKB_END_IDX_RX                    0
+#define TF_RSVD_FKB_TX                            0
+#define TF_RSVD_FKB_BEGIN_IDX_TX                  0
+#define TF_RSVD_FKB_END_IDX_TX                    0
+
+/* TBL Scope */
+#define TF_RSVD_TBL_SCOPE_RX                      1
+#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_RX            0
+#define TF_RSVD_TBL_SCOPE_END_IDX_RX              1
+#define TF_RSVD_TBL_SCOPE_TX                      1
+#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_TX            0
+#define TF_RSVD_TBL_SCOPE_END_IDX_TX              1
+
+/* EPOCH0 */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_EPOCH0_RX                         0
+#define TF_RSVD_EPOCH0_BEGIN_IDX_RX               0
+#define TF_RSVD_EPOCH0_END_IDX_RX                 0
+#define TF_RSVD_EPOCH0_TX                         0
+#define TF_RSVD_EPOCH0_BEGIN_IDX_TX               0
+#define TF_RSVD_EPOCH0_END_IDX_TX                 0
+
+/* EPOCH1 */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_EPOCH1_RX                         0
+#define TF_RSVD_EPOCH1_BEGIN_IDX_RX               0
+#define TF_RSVD_EPOCH1_END_IDX_RX                 0
+#define TF_RSVD_EPOCH1_TX                         0
+#define TF_RSVD_EPOCH1_BEGIN_IDX_TX               0
+#define TF_RSVD_EPOCH1_END_IDX_TX                 0
+
+/* METADATA */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_METADATA_RX                       0
+#define TF_RSVD_METADATA_BEGIN_IDX_RX             0
+#define TF_RSVD_METADATA_END_IDX_RX               0
+#define TF_RSVD_METADATA_TX                       0
+#define TF_RSVD_METADATA_BEGIN_IDX_TX             0
+#define TF_RSVD_METADATA_END_IDX_TX               0
+
+/* CT_STATE */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_CT_STATE_RX                       0
+#define TF_RSVD_CT_STATE_BEGIN_IDX_RX             0
+#define TF_RSVD_CT_STATE_END_IDX_RX               0
+#define TF_RSVD_CT_STATE_TX                       0
+#define TF_RSVD_CT_STATE_BEGIN_IDX_TX             0
+#define TF_RSVD_CT_STATE_END_IDX_TX               0
+
+/* RANGE_PROF */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_RANGE_PROF_RX                     0
+#define TF_RSVD_RANGE_PROF_BEGIN_IDX_RX           0
+#define TF_RSVD_RANGE_PROF_END_IDX_RX             0
+#define TF_RSVD_RANGE_PROF_TX                     0
+#define TF_RSVD_RANGE_PROF_BEGIN_IDX_TX           0
+#define TF_RSVD_RANGE_PROF_END_IDX_TX             0
+
+/* RANGE_ENTRY */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_RANGE_ENTRY_RX                    0
+#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_RX          0
+#define TF_RSVD_RANGE_ENTRY_END_IDX_RX            0
+#define TF_RSVD_RANGE_ENTRY_TX                    0
+#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_TX          0
+#define TF_RSVD_RANGE_ENTRY_END_IDX_TX            0
+
+/* LAG_ENTRY */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_LAG_ENTRY_RX                      0
+#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_RX            0
+#define TF_RSVD_LAG_ENTRY_END_IDX_RX              0
+#define TF_RSVD_LAG_ENTRY_TX                      0
+#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_TX            0
+#define TF_RSVD_LAG_ENTRY_END_IDX_TX              0
+
+
+/* SRAM - Resources
+ * Limited to the types that CFA provides.
+ */
+#define TF_RSVD_SRAM_FULL_ACTION_RX               8001
+#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX     0
+#define TF_RSVD_SRAM_FULL_ACTION_TX               8001
+#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX     0
+
+/* Not yet supported fully in the infra */
+#define TF_RSVD_SRAM_MCG_RX                       0
+#define TF_RSVD_SRAM_MCG_BEGIN_IDX_RX             0
+/* Multicast Group on TX is not supported */
+#define TF_RSVD_SRAM_MCG_TX                       0
+#define TF_RSVD_SRAM_MCG_BEGIN_IDX_TX             0
+
+/* First encap of 8B RX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_8B_RX                  32
+#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX        0
+/* First encap of 8B TX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_8B_TX                  0
+#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX        0
+
+#define TF_RSVD_SRAM_ENCAP_16B_RX                 16
+#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX       0
+/* First encap of 16B TX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_16B_TX                 20
+#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX       0
+
+/* Encap of 64B on RX is not supported */
+#define TF_RSVD_SRAM_ENCAP_64B_RX                 0
+#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_RX       0
+/* First encap of 64B TX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_64B_TX                 1007
+#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX       0
+
+#define TF_RSVD_SRAM_SP_SMAC_RX                   0
+#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX         0
+#define TF_RSVD_SRAM_SP_SMAC_TX                   0
+#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX         0
+
+/* SRAM SP IPV4 on RX is not supported */
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_RX              0
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_RX    0
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_TX              511
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX    0
+
+/* SRAM SP IPV6 on RX is not supported */
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_RX              0
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_RX    0
+/* Not yet supported fully in infra */
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_TX              0
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX    0
+
+#define TF_RSVD_SRAM_COUNTER_64B_RX               160
+#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX     0
+#define TF_RSVD_SRAM_COUNTER_64B_TX               160
+#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX     0
+
+#define TF_RSVD_SRAM_NAT_SPORT_RX                 0
+#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX       0
+#define TF_RSVD_SRAM_NAT_SPORT_TX                 0
+#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX       0
+
+#define TF_RSVD_SRAM_NAT_DPORT_RX                 0
+#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX       0
+#define TF_RSVD_SRAM_NAT_DPORT_TX                 0
+#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX       0
+
+#define TF_RSVD_SRAM_NAT_S_IPV4_RX                0
+#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX      0
+#define TF_RSVD_SRAM_NAT_S_IPV4_TX                0
+#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX      0
+
+#define TF_RSVD_SRAM_NAT_D_IPV4_RX                0
+#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX      0
+#define TF_RSVD_SRAM_NAT_D_IPV4_TX                0
+#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX      0
+
+/* HW Resource Pool names */
+
+#define TF_L2_CTXT_TCAM_POOL_NAME         l2_ctxt_tcam_pool
+#define TF_L2_CTXT_TCAM_POOL_NAME_RX      l2_ctxt_tcam_pool_rx
+#define TF_L2_CTXT_TCAM_POOL_NAME_TX      l2_ctxt_tcam_pool_tx
+
+#define TF_PROF_FUNC_POOL_NAME            prof_func_pool
+#define TF_PROF_FUNC_POOL_NAME_RX         prof_func_pool_rx
+#define TF_PROF_FUNC_POOL_NAME_TX         prof_func_pool_tx
+
+#define TF_PROF_TCAM_POOL_NAME            prof_tcam_pool
+#define TF_PROF_TCAM_POOL_NAME_RX         prof_tcam_pool_rx
+#define TF_PROF_TCAM_POOL_NAME_TX         prof_tcam_pool_tx
+
+#define TF_EM_PROF_ID_POOL_NAME           em_prof_id_pool
+#define TF_EM_PROF_ID_POOL_NAME_RX        em_prof_id_pool_rx
+#define TF_EM_PROF_ID_POOL_NAME_TX        em_prof_id_pool_tx
+
+#define TF_WC_TCAM_PROF_ID_POOL_NAME      wc_tcam_prof_id_pool
+#define TF_WC_TCAM_PROF_ID_POOL_NAME_RX   wc_tcam_prof_id_pool_rx
+#define TF_WC_TCAM_PROF_ID_POOL_NAME_TX   wc_tcam_prof_id_pool_tx
+
+#define TF_WC_TCAM_POOL_NAME              wc_tcam_pool
+#define TF_WC_TCAM_POOL_NAME_RX           wc_tcam_pool_rx
+#define TF_WC_TCAM_POOL_NAME_TX           wc_tcam_pool_tx
+
+#define TF_METER_PROF_POOL_NAME           meter_prof_pool
+#define TF_METER_PROF_POOL_NAME_RX        meter_prof_pool_rx
+#define TF_METER_PROF_POOL_NAME_TX        meter_prof_pool_tx
+
+#define TF_METER_INST_POOL_NAME           meter_inst_pool
+#define TF_METER_INST_POOL_NAME_RX        meter_inst_pool_rx
+#define TF_METER_INST_POOL_NAME_TX        meter_inst_pool_tx
+
+#define TF_MIRROR_POOL_NAME               mirror_pool
+#define TF_MIRROR_POOL_NAME_RX            mirror_pool_rx
+#define TF_MIRROR_POOL_NAME_TX            mirror_pool_tx
+
+#define TF_UPAR_POOL_NAME                 upar_pool
+#define TF_UPAR_POOL_NAME_RX              upar_pool_rx
+#define TF_UPAR_POOL_NAME_TX              upar_pool_tx
+
+#define TF_SP_TCAM_POOL_NAME              sp_tcam_pool
+#define TF_SP_TCAM_POOL_NAME_RX           sp_tcam_pool_rx
+#define TF_SP_TCAM_POOL_NAME_TX           sp_tcam_pool_tx
+
+#define TF_FKB_POOL_NAME                  fkb_pool
+#define TF_FKB_POOL_NAME_RX               fkb_pool_rx
+#define TF_FKB_POOL_NAME_TX               fkb_pool_tx
+
+#define TF_TBL_SCOPE_POOL_NAME            tbl_scope_pool
+#define TF_TBL_SCOPE_POOL_NAME_RX         tbl_scope_pool_rx
+#define TF_TBL_SCOPE_POOL_NAME_TX         tbl_scope_pool_tx
+
+#define TF_L2_FUNC_POOL_NAME              l2_func_pool
+#define TF_L2_FUNC_POOL_NAME_RX           l2_func_pool_rx
+#define TF_L2_FUNC_POOL_NAME_TX           l2_func_pool_tx
+
+#define TF_EPOCH0_POOL_NAME               epoch0_pool
+#define TF_EPOCH0_POOL_NAME_RX            epoch0_pool_rx
+#define TF_EPOCH0_POOL_NAME_TX            epoch0_pool_tx
+
+#define TF_EPOCH1_POOL_NAME               epoch1_pool
+#define TF_EPOCH1_POOL_NAME_RX            epoch1_pool_rx
+#define TF_EPOCH1_POOL_NAME_TX            epoch1_pool_tx
+
+#define TF_METADATA_POOL_NAME             metadata_pool
+#define TF_METADATA_POOL_NAME_RX          metadata_pool_rx
+#define TF_METADATA_POOL_NAME_TX          metadata_pool_tx
+
+#define TF_CT_STATE_POOL_NAME             ct_state_pool
+#define TF_CT_STATE_POOL_NAME_RX          ct_state_pool_rx
+#define TF_CT_STATE_POOL_NAME_TX          ct_state_pool_tx
+
+#define TF_RANGE_PROF_POOL_NAME           range_prof_pool
+#define TF_RANGE_PROF_POOL_NAME_RX        range_prof_pool_rx
+#define TF_RANGE_PROF_POOL_NAME_TX        range_prof_pool_tx
+
+#define TF_RANGE_ENTRY_POOL_NAME          range_entry_pool
+#define TF_RANGE_ENTRY_POOL_NAME_RX       range_entry_pool_rx
+#define TF_RANGE_ENTRY_POOL_NAME_TX       range_entry_pool_tx
+
+#define TF_LAG_ENTRY_POOL_NAME            lag_entry_pool
+#define TF_LAG_ENTRY_POOL_NAME_RX         lag_entry_pool_rx
+#define TF_LAG_ENTRY_POOL_NAME_TX         lag_entry_pool_tx
+
+/* SRAM Resource Pool names */
+#define TF_SRAM_FULL_ACTION_POOL_NAME     sram_full_action_pool
+#define TF_SRAM_FULL_ACTION_POOL_NAME_RX  sram_full_action_pool_rx
+#define TF_SRAM_FULL_ACTION_POOL_NAME_TX  sram_full_action_pool_tx
+
+#define TF_SRAM_MCG_POOL_NAME             sram_mcg_pool
+#define TF_SRAM_MCG_POOL_NAME_RX          sram_mcg_pool_rx
+#define TF_SRAM_MCG_POOL_NAME_TX          sram_mcg_pool_tx
+
+#define TF_SRAM_ENCAP_8B_POOL_NAME        sram_encap_8b_pool
+#define TF_SRAM_ENCAP_8B_POOL_NAME_RX     sram_encap_8b_pool_rx
+#define TF_SRAM_ENCAP_8B_POOL_NAME_TX     sram_encap_8b_pool_tx
+
+#define TF_SRAM_ENCAP_16B_POOL_NAME       sram_encap_16b_pool
+#define TF_SRAM_ENCAP_16B_POOL_NAME_RX    sram_encap_16b_pool_rx
+#define TF_SRAM_ENCAP_16B_POOL_NAME_TX    sram_encap_16b_pool_tx
+
+#define TF_SRAM_ENCAP_64B_POOL_NAME       sram_encap_64b_pool
+#define TF_SRAM_ENCAP_64B_POOL_NAME_RX    sram_encap_64b_pool_rx
+#define TF_SRAM_ENCAP_64B_POOL_NAME_TX    sram_encap_64b_pool_tx
+
+#define TF_SRAM_SP_SMAC_POOL_NAME         sram_sp_smac_pool
+#define TF_SRAM_SP_SMAC_POOL_NAME_RX      sram_sp_smac_pool_rx
+#define TF_SRAM_SP_SMAC_POOL_NAME_TX      sram_sp_smac_pool_tx
+
+#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME    sram_sp_smac_ipv4_pool
+#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_RX sram_sp_smac_ipv4_pool_rx
+#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX sram_sp_smac_ipv4_pool_tx
+
+#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME    sram_sp_smac_ipv6_pool
+#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_RX sram_sp_smac_ipv6_pool_rx
+#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX sram_sp_smac_ipv6_pool_tx
+
+#define TF_SRAM_STATS_64B_POOL_NAME       sram_stats_64b_pool
+#define TF_SRAM_STATS_64B_POOL_NAME_RX    sram_stats_64b_pool_rx
+#define TF_SRAM_STATS_64B_POOL_NAME_TX    sram_stats_64b_pool_tx
+
+#define TF_SRAM_NAT_SPORT_POOL_NAME       sram_nat_sport_pool
+#define TF_SRAM_NAT_SPORT_POOL_NAME_RX    sram_nat_sport_pool_rx
+#define TF_SRAM_NAT_SPORT_POOL_NAME_TX    sram_nat_sport_pool_tx
+
+#define TF_SRAM_NAT_DPORT_POOL_NAME       sram_nat_dport_pool
+#define TF_SRAM_NAT_DPORT_POOL_NAME_RX    sram_nat_dport_pool_rx
+#define TF_SRAM_NAT_DPORT_POOL_NAME_TX    sram_nat_dport_pool_tx
+
+#define TF_SRAM_NAT_S_IPV4_POOL_NAME      sram_nat_s_ipv4_pool
+#define TF_SRAM_NAT_S_IPV4_POOL_NAME_RX   sram_nat_s_ipv4_pool_rx
+#define TF_SRAM_NAT_S_IPV4_POOL_NAME_TX   sram_nat_s_ipv4_pool_tx
+
+#define TF_SRAM_NAT_D_IPV4_POOL_NAME      sram_nat_d_ipv4_pool
+#define TF_SRAM_NAT_D_IPV4_POOL_NAME_RX   sram_nat_d_ipv4_pool_rx
+#define TF_SRAM_NAT_D_IPV4_POOL_NAME_TX   sram_nat_d_ipv4_pool_tx
+
+/* Sw Resource Pool Names */
+
+#define TF_L2_CTXT_REMAP_POOL_NAME         l2_ctxt_remap_pool
+#define TF_L2_CTXT_REMAP_POOL_NAME_RX      l2_ctxt_remap_pool_rx
+#define TF_L2_CTXT_REMAP_POOL_NAME_TX      l2_ctxt_remap_pool_tx
+
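+/* Note (illustrative): the pool name macros above expand to bare
+ * identifiers so they can double as session structure member names,
+ * e.g. tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX resolves to
+ * tfs->l2_ctxt_tcam_pool_rx.
+ */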
+
 /** HW Resource types
  */
 enum tf_resource_type_hw {
@@ -57,4 +538,5 @@ enum tf_resource_type_sram {
 	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
 	TF_RESC_TYPE_SRAM_MAX
 };
+
 #endif /* _TF_RESOURCES_H_ */
-- 
2.7.4



* [dpdk-dev] [PATCH 07/33] net/bnxt: add initial tf core resource mgmt support
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (5 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 06/33] net/bnxt: add tf core session sram functions Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 08/33] net/bnxt: add resource manager functionality Venkat Duvvuru
                   ` (26 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add TruFlow public API definitions for resources
  as well as RM infrastructure

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile             |    1 +
 drivers/net/bnxt/tf_core/tf_core.c    |   39 +
 drivers/net/bnxt/tf_core/tf_core.h    |  125 +++
 drivers/net/bnxt/tf_core/tf_rm.c      | 1731 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_rm.h      |  175 ++++
 drivers/net/bnxt/tf_core/tf_session.h |  208 +++-
 6 files changed, 2277 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index d4c915a..ed8b1e2 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -51,6 +51,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/rand.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tfp.c
 
 #
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index d82f746..c4f23bd 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -20,6 +20,28 @@ static inline uint32_t SWAP_WORDS32(uint32_t val32)
 		((val32 & 0xffff0000) >> 16));
 }
 
+static void tf_seeds_init(struct tf_session *session)
+{
+	int i;
+	uint32_t r;
+
+	/* Initialize the lfsr */
+	rand_init();
+
+	/* RX and TX use the same seed values */
+	session->lkup_lkup3_init_cfg[TF_DIR_RX] =
+		session->lkup_lkup3_init_cfg[TF_DIR_TX] = SWAP_WORDS32(rand32());
+
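+	/* Layout note: even entries of lkup_em_seed_mem receive a full
+	 * 32-bit random word; the following odd entry keeps only the
+	 * low bit of the next random word.
+	 */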
+	for (i = 0; i < TF_LKUP_SEED_MEM_SIZE / 2; i++) {
+		r = SWAP_WORDS32(rand32());
+		session->lkup_em_seed_mem[TF_DIR_RX][i * 2] = r;
+		session->lkup_em_seed_mem[TF_DIR_TX][i * 2] = r;
+		r = SWAP_WORDS32(rand32());
+		session->lkup_em_seed_mem[TF_DIR_RX][i * 2 + 1] = (r & 0x1);
+		session->lkup_em_seed_mem[TF_DIR_TX][i * 2 + 1] = (r & 0x1);
+	}
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -109,6 +131,7 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize Session */
 	session->device_type = parms->device_type;
+	tf_rm_init(tfp);
 
 	/* Construct the Session ID */
 	session->session_id.internal.domain = domain;
@@ -125,6 +148,16 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup_close;
 	}
 
+	/* Adjust the Session with what firmware allowed us to get */
+	rc = tf_rm_allocate_validate(tfp);
+	if (rc) {
+		/* Log error */
+		goto cleanup_close;
+	}
+
+	/* Setup hash seeds */
+	tf_seeds_init(session);
+
 	session->ref_count++;
 
 	/* Return session ID */
@@ -195,6 +228,12 @@ tf_close_session(struct tf *tfp)
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	/* Cleanup if we're last user of the session */
+	if (tfs->ref_count == 1) {
+		/* Cleanup any outstanding resources */
+		rc_close = tf_rm_close(tfp);
+	}
+
 	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
 		rc = tf_msg_session_close(tfp);
 		if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 69433ac..3455d8f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -344,4 +344,129 @@ int tf_attach_session(struct tf *tfp,
  */
 int tf_close_session(struct tf *tfp);
 
+/**
+ * @page  ident Identity Management
+ *
+ * @ref tf_alloc_identifier
+ *
+ * @ref tf_free_identifier
+ */
+enum tf_identifier_type {
+	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	 *  and can be used in WC TCAM or EM keys to virtualize further
+	 *  lookups.
+	 */
+	TF_IDENT_TYPE_L2_CTXT,
+	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	 *  to enable virtualization of the profile TCAM.
+	 */
+	TF_IDENT_TYPE_PROF_FUNC,
+	/** The WC profile ID is included in the WC lookup key
+	 *  to enable virtualization of the WC TCAM hardware.
+	 */
+	TF_IDENT_TYPE_WC_PROF,
+	/** The EM profile ID is included in the EM lookup key
+	 *  to enable virtualization of the EM hardware. (not required for Brd4
+	 *  as it has table scope)
+	 */
+	TF_IDENT_TYPE_EM_PROF,
+	/** The L2 func is included in the ILT result and from recycling to
+	 *  enable virtualization of further lookups.
+	 */
+	TF_IDENT_TYPE_L2_FUNC
+};
+
+/**
+ * TCAM table type
+ */
+enum tf_tcam_tbl_type {
+	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	TF_TCAM_TBL_TYPE_PROF_TCAM,
+	TF_TCAM_TBL_TYPE_WC_TCAM,
+	TF_TCAM_TBL_TYPE_SP_TCAM,
+	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
+	TF_TCAM_TBL_TYPE_VEB_TCAM,
+	TF_TCAM_TBL_TYPE_MAX
+};
+
+/**
+ * Enumeration of TruFlow table types. A table type is used to identify a
+ * resource object.
+ *
+ * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
+ * the only table type that is connected with a table scope.
+ */
+enum tf_tbl_type {
+	/** Wh+/Brd2 Action Record */
+	TF_TBL_TYPE_FULL_ACT_RECORD,
+	/** Multicast Groups */
+	TF_TBL_TYPE_MCAST_GROUPS,
+	/** Action Encap 8 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_8B,
+	/** Action Encap 16 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_16B,
+	/** Action Encap 32 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_32B,
+	/** Action Encap 64 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_64B,
+	/** Action Source Properties SMAC */
+	TF_TBL_TYPE_ACT_SP_SMAC,
+	/** Action Source Properties SMAC IPv4 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	/** Action Source Properties SMAC IPv6 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
+	/** Action Statistics 64 Bits */
+	TF_TBL_TYPE_ACT_STATS_64,
+	/** Action Modify L4 Src Port */
+	TF_TBL_TYPE_ACT_MODIFY_SPORT,
+	/** Action Modify L4 Dest Port */
+	TF_TBL_TYPE_ACT_MODIFY_DPORT,
+	/** Action Modify IPv4 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
+	/** Action Modify IPv4 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
+	/** Action Modify IPv6 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
+	/** Action Modify IPv6 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
+
+	/* HW */
+
+	/** Meter Profiles */
+	TF_TBL_TYPE_METER_PROF,
+	/** Meter Instance */
+	TF_TBL_TYPE_METER_INST,
+	/** Mirror Config */
+	TF_TBL_TYPE_MIRROR_CONFIG,
+	/** UPAR */
+	TF_TBL_TYPE_UPAR,
+	/** Brd4 Epoch 0 table */
+	TF_TBL_TYPE_EPOCH0,
+	/** Brd4 Epoch 1 table  */
+	TF_TBL_TYPE_EPOCH1,
+	/** Brd4 Metadata  */
+	TF_TBL_TYPE_METADATA,
+	/** Brd4 CT State  */
+	TF_TBL_TYPE_CT_STATE,
+	/** Brd4 Range Profile  */
+	TF_TBL_TYPE_RANGE_PROF,
+	/** Brd4 Range Entry  */
+	TF_TBL_TYPE_RANGE_ENTRY,
+	/** Brd4 LAG Entry  */
+	TF_TBL_TYPE_LAG,
+	/** Brd4 only VNIC/SVIF Table */
+	TF_TBL_TYPE_VNIC_SVIF,
+
+	/* External */
+
+	/** External table type - initially one pool of poolsize entries.
+	 * All External table types are associated with a table
+	 * scope. Internal types are not.
+	 */
+	TF_TBL_TYPE_EXT,
+	/** Future - external pool of size0 entries */
+	TF_TBL_TYPE_EXT_0,
+	TF_TBL_TYPE_MAX
+};
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
new file mode 100644
index 0000000..56767e7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -0,0 +1,1731 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+
+#include "tf_rm.h"
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tf_resources.h"
+#include "tf_msg.h"
+#include "bnxt.h"
+
+/**
+ * Internal macro to perform HW resource allocation check between what
+ * firmware reports vs what was statically requested.
+ *
+ * Parameters:
+ *   struct tf_rm_hw_query    *hquery      - Pointer to the hw query result
+ *   enum tf_dir               dir         - Direction to process
+ *   enum tf_resource_type_hw  hcapi_type  - HCAPI type, the index element
+ *                                           in the hw query structure
+ *   define                    def_value   - Define value to check against
+ *   uint32_t                 *eflag       - Result of the check
+ */
+#define TF_RM_CHECK_HW_ALLOC(hquery, dir, hcapi_type, def_value, eflag) do {  \
+	if ((dir) == TF_DIR_RX) {					      \
+		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _RX) \
+			*(eflag) |= 1 << (hcapi_type);			      \
+	} else {							      \
+		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _TX) \
+			*(eflag) |= 1 << (hcapi_type);			      \
+	}								      \
+} while (0)
+
+/**
+ * Internal macro to perform SRAM resource allocation check between
+ * what firmware reports vs what was statically requested.
+ *
+ * Parameters:
+ *   struct tf_rm_sram_query   *squery      - Pointer to the sram query result
+ *   enum tf_dir                dir         - Direction to process
+ *   enum tf_resource_type_sram hcapi_type  - HCAPI type, the index element
+ *                                            in the sram query structure
+ *   define                     def_value   - Define value to check against
+ *   uint32_t                  *eflag       - Result of the check
+ */
+#define TF_RM_CHECK_SRAM_ALLOC(squery, dir, hcapi_type, def_value, eflag) do { \
+	if ((dir) == TF_DIR_RX) {					       \
+		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _RX)\
+			*(eflag) |= 1 << (hcapi_type);			       \
+	} else {							       \
+		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _TX)\
+			*(eflag) |= 1 << (hcapi_type);			       \
+	}								       \
+} while (0)
+
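+/* Expansion sketch (illustrative, with hypothetical locals q and flag):
+ *
+ *   TF_RM_CHECK_SRAM_ALLOC(q, TF_DIR_RX, TF_RESC_TYPE_SRAM_MCG,
+ *                          TF_RSVD_SRAM_MCG, &flag);
+ *
+ * pastes the define into TF_RSVD_SRAM_MCG_RX and sets bit
+ * TF_RESC_TYPE_SRAM_MCG in flag when the firmware-reported max
+ * differs from that reserved value.
+ */
+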
+/**
+ * Internal macro to convert a reserved resource define name to be
+ * direction specific.
+ *
+ * Parameters:
+ *   enum tf_dir    dir         - Direction to process
+ *   string         type        - Type name to append RX or TX to
+ *   string         dtype       - Direction specific type
+ */
+#define TF_RESC_RSVD(dir, type, dtype) do {	\
+		if ((dir) == TF_DIR_RX)		\
+			(dtype) = type ## _RX;	\
+		else				\
+			(dtype) = type ## _TX;	\
+	} while (0)
+
+const char
+*tf_dir_2_str(enum tf_dir dir)
+{
+	switch (dir) {
+	case TF_DIR_RX:
+		return "RX";
+	case TF_DIR_TX:
+		return "TX";
+	default:
+		return "Invalid direction";
+	}
+}
+
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type)
+{
+	switch (id_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		return "l2_ctxt_remap";
+	case TF_IDENT_TYPE_PROF_FUNC:
+		return "prof_func";
+	case TF_IDENT_TYPE_WC_PROF:
+		return "wc_prof";
+	case TF_IDENT_TYPE_EM_PROF:
+		return "em_prof";
+	case TF_IDENT_TYPE_L2_FUNC:
+		return "l2_func";
+	default:
+		break;
+	}
+	return "Invalid identifier";
+}
+
+const char
+*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type)
+{
+	switch (sram_type) {
+	case TF_RESC_TYPE_SRAM_FULL_ACTION:
+		return "Full action";
+	case TF_RESC_TYPE_SRAM_MCG:
+		return "MCG";
+	case TF_RESC_TYPE_SRAM_ENCAP_8B:
+		return "Encap 8B";
+	case TF_RESC_TYPE_SRAM_ENCAP_16B:
+		return "Encap 16B";
+	case TF_RESC_TYPE_SRAM_ENCAP_64B:
+		return "Encap 64B";
+	case TF_RESC_TYPE_SRAM_SP_SMAC:
+		return "Source properties SMAC";
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
+		return "Source properties SMAC IPv4";
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
+		return "Source properties IPv6";
+	case TF_RESC_TYPE_SRAM_COUNTER_64B:
+		return "Counter 64B";
+	case TF_RESC_TYPE_SRAM_NAT_SPORT:
+		return "NAT source port";
+	case TF_RESC_TYPE_SRAM_NAT_DPORT:
+		return "NAT destination port";
+	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
+		return "NAT source IPv4";
+	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
+		return "NAT destination IPv4";
+	default:
+		return "Invalid identifier";
+	}
+}
+
+/**
+ * Helper function to perform a SRAM HCAPI resource type lookup
+ * against the reserved value of the same static type.
+ *
+ * Returns:
+ *   -EOPNOTSUPP - Reserved resource type not supported
+ *   Value       - Integer value of the reserved value for the requested type
+ */
+static int
+tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
+{
+	int value = -EOPNOTSUPP;
+
+	switch (index) {
+	case TF_RESC_TYPE_SRAM_FULL_ACTION:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_FULL_ACTION, value);
+		break;
+	case TF_RESC_TYPE_SRAM_MCG:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_MCG, value);
+		break;
+	case TF_RESC_TYPE_SRAM_ENCAP_8B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_8B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_ENCAP_16B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_16B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_ENCAP_64B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_64B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_SP_SMAC:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC, value);
+		break;
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV4, value);
+		break;
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV6, value);
+		break;
+	case TF_RESC_TYPE_SRAM_COUNTER_64B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_COUNTER_64B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_SPORT:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_SPORT, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_DPORT:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_DPORT, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_S_IPV4, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_D_IPV4, value);
+		break;
+	default:
+		break;
+	}
+
+	return value;
+}
+
+/**
+ * Helper function to print all the SRAM resource qcaps errors
+ * reported in the error_flag.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] sram_query
+ *   Pointer to the SRAM query result reported by the firmware
+ *
+ * [in] error_flag
+ *   Pointer to the sram error flags created at time of the query check
+ */
+static void
+tf_rm_print_sram_qcaps_error(enum tf_dir dir,
+			     struct tf_rm_sram_query *sram_query,
+			     uint32_t *error_flag)
+{
+	int i;
+
+	PMD_DRV_LOG(ERR, "QCAPS errors SRAM\n");
+	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	PMD_DRV_LOG(ERR, "  Elements:\n");
+
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
+		if (*error_flag & 1 << i)
+			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+				    tf_hcapi_sram_2_str(i),
+				    sram_query->sram_query[i].max,
+				    tf_rm_rsvd_sram_value(dir, i));
+	}
+}
+
+/**
+ * Performs a HW resource check between what firmware capability
+ * reports and what the core expects is available.
+ *
+ * Firmware performs the resource carving at AFM init time and the
+ * resource capability is reported in the TruFlow qcaps msg.
+ *
+ * [in] query
+ *   Pointer to HW Query data structure. Query holds what the firmware
+ *   offers of the HW resources.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in/out] error_flag
+ *   Pointer to a bit array indicating the error of a single HCAPI
+ *   resource type. When a bit is set to 1, the HCAPI resource type
+ *   failed static allocation.
+ *
+ * Returns:
+ *  0       - Success
+ *  -ENOMEM - Failure on one of the allocated resources. Check the
+ *            error_flag for what types are flagged errored.
+ */
+static int
+tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
+			    enum tf_dir dir,
+			    uint32_t *error_flag)
+{
+	*error_flag = 0;
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_RANGE_ENTRY,
+			     TF_RSVD_RANGE_ENTRY,
+			     error_flag);
+
+	if (*error_flag != 0)
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * Performs a SRAM resource check between what firmware capability
+ * reports and what the core expects is available.
+ *
+ * Firmware performs the resource carving at AFM init time and the
+ * resource capability is reported in the TruFlow qcaps msg.
+ *
+ * [in] query
+ *   Pointer to SRAM Query data structure. Query holds what the
+ *   firmware offers of the SRAM resources.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in/out] error_flag
+ *   Pointer to a bit array indicating the error of a single HCAPI
+ *   resource type. When a bit is set to 1, the HCAPI resource type
+ *   failed static allocation.
+ *
+ * Returns:
+ *  0       - Success
+ *  -ENOMEM - Failure on one of the allocated resources. Check the
+ *            error_flag for what types are flagged errored.
+ */
+static int
+tf_rm_check_sram_qcaps_static(struct tf_rm_sram_query *query,
+			      enum tf_dir dir,
+			      uint32_t *error_flag)
+{
+	*error_flag = 0;
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_FULL_ACTION,
+			       TF_RSVD_SRAM_FULL_ACTION,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_MCG,
+			       TF_RSVD_SRAM_MCG,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_ENCAP_8B,
+			       TF_RSVD_SRAM_ENCAP_8B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_ENCAP_16B,
+			       TF_RSVD_SRAM_ENCAP_16B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_ENCAP_64B,
+			       TF_RSVD_SRAM_ENCAP_64B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_SP_SMAC,
+			       TF_RSVD_SRAM_SP_SMAC,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+			       TF_RSVD_SRAM_SP_SMAC_IPV4,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+			       TF_RSVD_SRAM_SP_SMAC_IPV6,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_COUNTER_64B,
+			       TF_RSVD_SRAM_COUNTER_64B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_SPORT,
+			       TF_RSVD_SRAM_NAT_SPORT,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_DPORT,
+			       TF_RSVD_SRAM_NAT_DPORT,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_S_IPV4,
+			       TF_RSVD_SRAM_NAT_S_IPV4,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_D_IPV4,
+			       TF_RSVD_SRAM_NAT_D_IPV4,
+			       error_flag);
+
+	if (*error_flag != 0)
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * Internal function to mark pool entries used.
+ */
+static void
+tf_rm_reserve_range(uint32_t count,
+		    uint32_t rsv_begin,
+		    uint32_t rsv_end,
+		    uint32_t max,
+		    struct bitalloc *pool)
+{
+	uint32_t i;
+
+	/* If no resources have been requested, mark everything
+	 * 'used'
+	 */
+	if (count == 0)	{
+		for (i = 0; i < max; i++)
+			ba_alloc_index(pool, i);
+	} else {
+		/* Support 2 main modes
+		 * Reserved range starts from bottom up (with
+		 * pre-reserved value or not)
+		 * - begin = 0 to end xx
+		 * - begin = 1 to end xx
+		 *
+		 * Reserved range starts from top down
+		 * - begin = yy to end max
+		 */
+
+		/* Bottom up check, start from 0 */
+		if (rsv_begin == 0) {
+			for (i = rsv_end + 1; i < max; i++)
+				ba_alloc_index(pool, i);
+		}
+
+		/* Bottom up check, start from 1 or higher OR
+		 * Top Down
+		 */
+		if (rsv_begin >= 1) {
+			/* Allocate from 0 until start */
+			for (i = 0; i < rsv_begin; i++)
+				ba_alloc_index(pool, i);
+
+			/* Skip and then do the remaining */
+			if (rsv_end < max - 1) {
+				for (i = rsv_end + 1; i < max; i++)
+					ba_alloc_index(pool, i);
+			}
+		}
+	}
+}
+
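+/* Worked example (illustrative only): with max = 8 and a reserved
+ * range of count = 3 at rsv_begin = 0 (rsv_end = 2), indices 3-7 are
+ * marked used and 0-2 stay free. With the same range placed top down
+ * at rsv_begin = 5 (rsv_end = 7), indices 0-4 are marked used
+ * instead. With count = 0 the whole pool 0-7 is marked used.
+ */
+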
+/**
+ * Internal function to mark as allocated all the l2 ctxt entries that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
+	uint32_t end = 0;
+
+	/* l2 ctxt rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
+
+	/* l2 ctxt tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the l2 func resources
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_l2_func(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
+	uint32_t end = 0;
+
+	/* l2 func rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_RX);
+
+	/* l2 func tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the full action
+ * resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_full_action(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_FULL_ACTION;
+	uint16_t end = 0;
+
+	/* full action rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_FULL_ACTION_RX,
+			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX);
+
+	/* full action tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_FULL_ACTION_TX,
+			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the multicast group
+ * resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_mcg(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_MCG;
+	uint16_t end = 0;
+
+	/* multicast group rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_MCG_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_MCG_RX,
+			    tfs->TF_SRAM_MCG_POOL_NAME_RX);
+
+	/* Multicast Group on TX is not supported */
+}
+
+/**
+ * Internal function to mark as allocated all the encap resources that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_encap(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_ENCAP_8B;
+	uint16_t end = 0;
+
+	/* encap 8b rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_8B_RX,
+			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX);
+
+	/* encap 8b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_8B_TX,
+			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_ENCAP_16B;
+
+	/* encap 16b rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_16B_RX,
+			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX);
+
+	/* encap 16b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_16B_TX,
+			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_ENCAP_64B;
+
+	/* Encap 64B not supported on RX */
+
+	/* Encap 64b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_64B_TX,
+			    tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the sp resources that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_sp(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_SP_SMAC;
+	uint16_t end = 0;
+
+	/* sp smac rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_RX,
+			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX);
+
+	/* sp smac tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_TX,
+			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
+
+	/* SP SMAC IPv4 not supported on RX */
+
+	/* sp smac ipv4 tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_IPV4_TX,
+			    tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
+
+	/* SP SMAC IPv6 not supported on RX */
+
+	/* sp smac ipv6 tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_IPV6_TX,
+			    tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the stat resources that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_stats(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_COUNTER_64B;
+	uint16_t end = 0;
+
+	/* counter 64b rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_COUNTER_64B_RX,
+			    tfs->TF_SRAM_STATS_64B_POOL_NAME_RX);
+
+	/* counter 64b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_COUNTER_64B_TX,
+			    tfs->TF_SRAM_STATS_64B_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the nat resources that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_nat(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_NAT_SPORT;
+	uint16_t end = 0;
+
+	/* nat source port rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_SPORT_RX,
+			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX);
+
+	/* nat source port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_SPORT_TX,
+			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_NAT_DPORT;
+
+	/* nat destination port rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_DPORT_RX,
+			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX);
+
+	/* nat destination port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_DPORT_TX,
+			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
+
+	/* nat source port ipv4 rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_S_IPV4_RX,
+			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX);
+
+	/* nat source ipv4 port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_S_IPV4_TX,
+			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
+
+	/* nat destination port ipv4 rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_D_IPV4_RX,
+			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX);
+
+	/* nat destination ipv4 port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_D_IPV4_TX,
+			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX);
+}
+
+/**
+ * Internal function used to validate the HW allocated resources
+ * against the requested values.
+ */
+static int
+tf_rm_hw_alloc_validate(enum tf_dir dir,
+			struct tf_rm_hw_alloc *hw_alloc,
+			struct tf_rm_entry *hw_entry)
+{
+	int error = 0;
+	int i;
+
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
+		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
+			PMD_DRV_LOG(ERR,
+				"%s, Alloc failed id:%d expect:%d got:%d\n",
+				tf_dir_2_str(dir),
+				i,
+				hw_alloc->hw_num[i],
+				hw_entry[i].stride);
+			error = -1;
+		}
+	}
+
+	return error;
+}
+
+/**
+ * Internal function used to validate the SRAM allocated resources
+ * against the requested values.
+ */
+static int
+tf_rm_sram_alloc_validate(enum tf_dir dir,
+			  struct tf_rm_sram_alloc *sram_alloc,
+			  struct tf_rm_entry *sram_entry)
+{
+	int error = 0;
+	int i;
+
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
+		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
+			PMD_DRV_LOG(ERR,
+				"%s, Alloc failed idx:%d expect:%d got:%d\n",
+				tf_dir_2_str(dir),
+				i,
+				sram_alloc->sram_num[i],
+				sram_entry[i].stride);
+			error = -1;
+		}
+	}
+
+	return error;
+}
+
+/**
+ * Internal function used to mark as allocated all the HW resources
+ * that Truflow does not own.
+ */
+static void
+tf_rm_reserve_hw(struct tf *tfp)
+{
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* TBD
+	 * There is no direct AFM resource allocation as it is carved
+	 * statically at AFM boot time. Thus the bit allocators work
+	 * on the full HW resource amount and we just mark everything
+	 * used except the resources that Truflow took ownership of.
+	 */
+	tf_rm_rsvd_l2_ctxt(tfs);
+	tf_rm_rsvd_l2_func(tfs);
+}
+
+/**
+ * Internal function used to mark as allocated all the SRAM resources
+ * that Truflow does not own.
+ */
+static void
+tf_rm_reserve_sram(struct tf *tfp)
+{
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* TBD
+	 * There is no direct AFM resource allocation as it is carved
+	 * statically at AFM boot time. Thus the bit allocators work
+	 * on the full HW resource amount and we just mark everything
+	 * used except the resources that Truflow took ownership of.
+	 */
+	tf_rm_rsvd_sram_full_action(tfs);
+	tf_rm_rsvd_sram_mcg(tfs);
+	tf_rm_rsvd_sram_encap(tfs);
+	tf_rm_rsvd_sram_sp(tfs);
+	tf_rm_rsvd_sram_stats(tfs);
+	tf_rm_rsvd_sram_nat(tfs);
+}
+
+/**
+ * Internal function used to allocate and validate all HW resources.
+ */
+static int
+tf_rm_allocate_validate_hw(struct tf *tfp,
+			   enum tf_dir dir)
+{
+	int rc;
+	int i;
+	struct tf_rm_hw_query hw_query;
+	struct tf_rm_hw_alloc hw_alloc;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_rm_entry *hw_entries;
+	uint32_t error_flag;
+
+	if (dir == TF_DIR_RX)
+		hw_entries = tfs->resc.rx.hw_entry;
+	else
+		hw_entries = tfs->resc.tx.hw_entry;
+
+	/* Query for Session HW Resources */
+	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, HW qcaps message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
+			tf_dir_2_str(dir),
+			error_flag);
+		goto cleanup;
+	}
+
+	/* Post process HW capability */
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++)
+		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
+
+	/* Allocate Session HW Resources */
+	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, HW alloc message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	/* Perform HW allocation validation as it's possible the
+	 * resource availability changed between qcaps and alloc
+	 */
+	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, HW Resource validation failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	return 0;
+
+ cleanup:
+	return -1;
+}
+
+/**
+ * Internal function used to allocate and validate all SRAM resources.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0  - Success
+ *   -1 - Internal error
+ */
+static int
+tf_rm_allocate_validate_sram(struct tf *tfp,
+			     enum tf_dir dir)
+{
+	int rc;
+	int i;
+	struct tf_rm_sram_query sram_query;
+	struct tf_rm_sram_alloc sram_alloc;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_rm_entry *sram_entries;
+	uint32_t error_flag;
+
+	if (dir == TF_DIR_RX)
+		sram_entries = tfs->resc.rx.sram_entry;
+	else
+		sram_entries = tfs->resc.tx.sram_entry;
+
+	/* Query for Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+			tf_dir_2_str(dir),
+			error_flag);
+		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
+		goto cleanup;
+	}
+
+	/* Post process SRAM capability */
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
+		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+
+	/* Allocate Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_alloc(tfp,
+					    dir,
+					    &sram_alloc,
+					    sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	/* Perform SRAM allocation validation as it's possible the
+	 * resource availability changed between qcaps and alloc
+	 */
+	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	return 0;
+
+ cleanup:
+	return -1;
+}
+
+/**
+ * Helper function used to prune a SRAM resource array so that it only
+ * holds elements that need to be flushed.
+ *
+ * [in] tfs
+ *   Session handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] sram_entries
+ *   Master SRAM resource database
+ *
+ * [in/out] flush_entries
+ *   Pruned SRAM Resource database of entries to be flushed. This
+ *   array should be passed in as a complete copy of the master SRAM
+ *   Resource database. The outgoing result will be a pruned version
+ *   based on the result of the requested checking
+ *
+ * Returns:
+ *    0 - Success, no flush required
+ *    1 - Success, flush required
+ *   -1 - Internal error
+ */
+static int
+tf_rm_sram_to_flush(struct tf_session *tfs,
+		    enum tf_dir dir,
+		    struct tf_rm_entry *sram_entries,
+		    struct tf_rm_entry *flush_entries)
+{
+	int rc;
+	int flush_rc = 0;
+	int free_cnt;
+	struct bitalloc *pool;
+
+	/* Check all the sram resource pools for leftover elements.
+	 * Any leftovers found will cause the complete pool of that
+	 * type to be flushed.
+	 */
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_FULL_ACTION_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	/* Only pools for RX direction */
+	if (dir == TF_DIR_RX) {
+		TF_RM_GET_POOLS_RX(tfs, &pool,
+				   TF_SRAM_MCG_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_MCG].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune TX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_ENCAP_8B_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_ENCAP_16B_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	/* Only pools for TX direction */
+	if (dir == TF_DIR_TX) {
+		TF_RM_GET_POOLS_TX(tfs, &pool,
+				   TF_SRAM_ENCAP_64B_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt ==
+		    sram_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune RX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_SP_SMAC_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	/* Only pools for TX direction */
+	if (dir == TF_DIR_TX) {
+		TF_RM_GET_POOLS_TX(tfs, &pool,
+				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt ==
+		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride =
+				0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune RX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride = 0;
+	}
+
+	/* Only pools for TX direction */
+	if (dir == TF_DIR_TX) {
+		TF_RM_GET_POOLS_TX(tfs, &pool,
+				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt ==
+		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride =
+				0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune RX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride = 0;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_STATS_64B_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_SPORT_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_DPORT_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_S_IPV4_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_D_IPV4_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	return flush_rc;
+}
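
The function above repeats the same per-pool check for every SRAM type: if the
free count equals the reserved stride the pool is fully free and its flush
entry is pruned, otherwise a flush is flagged. The repetition could be captured
by a small helper along the lines of the sketch below (the helper name
tf_rm_prune_check is illustrative and not part of this patch; ba_free_count()
and struct tf_rm_entry are as used above):

/* Sketch only (not part of this patch): a hypothetical helper that
 * factors out the repeated free-count check in tf_rm_sram_to_flush().
 * Returns 0 when the pool is fully free (flush entry pruned), 1 when
 * leftover elements mean the type must be flushed.
 */
static int
tf_rm_prune_check(struct bitalloc *pool,
		  struct tf_rm_entry *sram_entry,
		  struct tf_rm_entry *flush_entry)
{
	if (ba_free_count(pool) == (int)sram_entry->stride) {
		/* Fully free; nothing of this type needs flushing */
		flush_entry->start = 0;
		flush_entry->stride = 0;
		return 0;
	}
	return 1;
}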
+
+/**
+ * Helper function used to generate an error log for the SRAM types
+ * that need to be flushed. The types should have been cleaned up
+ * ahead of invoking tf_close_session.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] sram_entries
+ *   SRAM Resource database holding elements to be flushed
+ */
+static void
+tf_rm_log_sram_flush(enum tf_dir dir,
+		     struct tf_rm_entry *sram_entries)
+{
+	int i;
+
+	/* Walk the SRAM flush array and log the types that weren't
+	 * cleaned up.
+	 */
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
+		if (sram_entries[i].stride != 0)
+			PMD_DRV_LOG(ERR,
+				    "%s: %s was not cleaned up\n",
+				    tf_dir_2_str(dir),
+				    tf_hcapi_sram_2_str(i));
+	}
+}
+
+void
+tf_rm_init(struct tf *tfp)
+{
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	/* This version is host specific and should be checked against
+	 * when attaching, as there is no guarantee that a secondary
+	 * would run from the same image version.
+	 */
+	tfs->ver.major = TF_SESSION_VER_MAJOR;
+	tfs->ver.minor = TF_SESSION_VER_MINOR;
+	tfs->ver.update = TF_SESSION_VER_UPDATE;
+
+	tfs->session_id.id = 0;
+	tfs->ref_count = 0;
+
+	/* Initialization of Table Scopes */
+	/* ll_init(&tfs->tbl_scope_ll); */
+
+	/* Initialization of HW and SRAM resource DB */
+	memset(&tfs->resc, 0, sizeof(struct tf_rm_db));
+
+	/* Initialization of HW Resource Pools */
+	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+
+	/* Initialization of SRAM Resource Pools
+	 * These pools are set to the TFLIB-defined MAX sizes, not
+	 * AFM's HW max, so as to limit memory consumption.
+	 */
+	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX,
+		TF_RSVD_SRAM_FULL_ACTION_RX);
+	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX,
+		TF_RSVD_SRAM_FULL_ACTION_TX);
+	/* Only Multicast Group on RX is supported */
+	ba_init(tfs->TF_SRAM_MCG_POOL_NAME_RX,
+		TF_RSVD_SRAM_MCG_RX);
+	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX,
+		TF_RSVD_SRAM_ENCAP_8B_RX);
+	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX,
+		TF_RSVD_SRAM_ENCAP_8B_TX);
+	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX,
+		TF_RSVD_SRAM_ENCAP_16B_RX);
+	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX,
+		TF_RSVD_SRAM_ENCAP_16B_TX);
+	/* Only Encap 64B on TX is supported */
+	ba_init(tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX,
+		TF_RSVD_SRAM_ENCAP_64B_TX);
+	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX,
+		TF_RSVD_SRAM_SP_SMAC_RX);
+	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX,
+		TF_RSVD_SRAM_SP_SMAC_TX);
+	/* Only SP SMAC IPv4 on TX is supported */
+	ba_init(tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
+		TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
+	/* Only SP SMAC IPv6 on TX is supported */
+	ba_init(tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
+		TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
+	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_RX,
+		TF_RSVD_SRAM_COUNTER_64B_RX);
+	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_TX,
+		TF_RSVD_SRAM_COUNTER_64B_TX);
+	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_SPORT_RX);
+	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_SPORT_TX);
+	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_DPORT_RX);
+	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_DPORT_TX);
+	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_S_IPV4_RX);
+	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_S_IPV4_TX);
+	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_D_IPV4_RX);
+	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_D_IPV4_TX);
+
+	/* Initialization of pools local to TF Core */
+	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+}
+
+int
+tf_rm_allocate_validate(struct tf *tfp)
+{
+	int rc;
+	int i;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = tf_rm_allocate_validate_hw(tfp, i);
+		if (rc)
+			return rc;
+		rc = tf_rm_allocate_validate_sram(tfp, i);
+		if (rc)
+			return rc;
+	}
+
+	/* With both HW and SRAM allocated and validated we can
+	 * 'scrub' the reservation on the pools.
+	 */
+	tf_rm_reserve_hw(tfp);
+	tf_rm_reserve_sram(tfp);
+
+	return rc;
+}
+
+int
+tf_rm_close(struct tf *tfp)
+{
+	int rc;
+	int rc_close = 0;
+	int i;
+	struct tf_rm_entry *hw_entries;
+	struct tf_rm_entry *sram_entries;
+	struct tf_rm_entry *sram_flush_entries;
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	struct tf_rm_db flush_resc = tfs->resc;
+
+	/* On close it is assumed that the session has already cleaned
+	 * up all its resources, individually, while destroying its
+	 * flows. No checking is performed thus the behavior is as
+	 * follows.
+	 *
+	 * Session RM will signal FW to release session resources. FW
+	 * will perform invalidation of all the allocated entries
+	 * (assuring any outstanding resources have been cleared) and
+	 * then free the FW RM instance.
+	 *
+	 * Session will then be freed by tf_close_session() thus there
+	 * is no need to clean each resource pool as the whole session
+	 * is going away.
+	 */
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		if (i == TF_DIR_RX) {
+			hw_entries = tfs->resc.rx.hw_entry;
+			sram_entries = tfs->resc.rx.sram_entry;
+			sram_flush_entries = flush_resc.rx.sram_entry;
+		} else {
+			hw_entries = tfs->resc.tx.hw_entry;
+			sram_entries = tfs->resc.tx.sram_entry;
+			sram_flush_entries = flush_resc.tx.sram_entry;
+		}
+
+		/* Check for any not previously freed SRAM resources
+		 * and flush if required.
+		 */
+		rc = tf_rm_sram_to_flush(tfs,
+					 i,
+					 sram_entries,
+					 sram_flush_entries);
+		if (rc) {
+			rc_close = -ENOTEMPTY;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, lingering SRAM resources\n",
+				    tf_dir_2_str(i));
+
+			/* Log the entries to be flushed */
+			tf_rm_log_sram_flush(i, sram_flush_entries);
+
+			rc = tf_msg_session_sram_resc_flush(tfp,
+							    i,
+							    sram_flush_entries);
+			if (rc) {
+				rc_close = rc;
+				/* Log error */
+				PMD_DRV_LOG(ERR,
+					    "%s, HW flush failed\n",
+					    tf_dir_2_str(i));
+			}
+		}
+
+		rc = tf_msg_session_hw_resc_free(tfp, i, hw_entries);
+		if (rc) {
+			rc_close = rc;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, HW free failed\n",
+				    tf_dir_2_str(i));
+		}
+
+		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
+		if (rc) {
+			rc_close = rc;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, SRAM free failed\n",
+				    tf_dir_2_str(i));
+		}
+	}
+
+	return rc_close;
+}
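
Taken together, tf_rm_init(), tf_rm_allocate_validate() and tf_rm_close()
imply the per-session call order sketched below (return-code checks elided;
the wrapper name is illustrative, the real drivers being tf_open_session()
and tf_close_session() per the comments above):

/* Sketch of the assumed RM call order over a session lifetime. */
static void
example_rm_lifecycle(struct tf *tfp)
{
	/* Seed every pool with the TFLIB-defined maximum sizes */
	tf_rm_init(tfp);

	/* Query FW, validate against the static reservations and mark
	 * everything Truflow does not own as allocated.
	 */
	if (tf_rm_allocate_validate(tfp))
		return;

	/* ... per-flow resource allocations and frees happen here ... */

	/* Flush any leftovers and hand all resources back to FW */
	tf_rm_close(tfp);
}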
+
+int
+tf_rm_convert_tbl_type(enum tf_tbl_type type,
+		       uint32_t *hcapi_type)
+{
+	int rc = 0;
+
+	switch (type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		*hcapi_type = TF_RESC_TYPE_SRAM_FULL_ACTION;
+		break;
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		*hcapi_type = TF_RESC_TYPE_SRAM_MCG;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_8B;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_16B;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_64B;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
+		break;
+	case TF_TBL_TYPE_ACT_STATS_64:
+		*hcapi_type = TF_RESC_TYPE_SRAM_COUNTER_64B;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_SPORT;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_DPORT;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
+		break;
+	case TF_TBL_TYPE_METER_PROF:
+		*hcapi_type = TF_RESC_TYPE_HW_METER_PROF;
+		break;
+	case TF_TBL_TYPE_METER_INST:
+		*hcapi_type = TF_RESC_TYPE_HW_METER_INST;
+		break;
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		*hcapi_type = TF_RESC_TYPE_HW_MIRROR;
+		break;
+	case TF_TBL_TYPE_UPAR:
+		*hcapi_type = TF_RESC_TYPE_HW_UPAR;
+		break;
+	case TF_TBL_TYPE_EPOCH0:
+		*hcapi_type = TF_RESC_TYPE_HW_EPOCH0;
+		break;
+	case TF_TBL_TYPE_EPOCH1:
+		*hcapi_type = TF_RESC_TYPE_HW_EPOCH1;
+		break;
+	case TF_TBL_TYPE_METADATA:
+		*hcapi_type = TF_RESC_TYPE_HW_METADATA;
+		break;
+	case TF_TBL_TYPE_CT_STATE:
+		*hcapi_type = TF_RESC_TYPE_HW_CT_STATE;
+		break;
+	case TF_TBL_TYPE_RANGE_PROF:
+		*hcapi_type = TF_RESC_TYPE_HW_RANGE_PROF;
+		break;
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		*hcapi_type = TF_RESC_TYPE_HW_RANGE_ENTRY;
+		break;
+	case TF_TBL_TYPE_LAG:
+		*hcapi_type = TF_RESC_TYPE_HW_LAG_ENTRY;
+		break;
+	/* Not yet supported */
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+	case TF_TBL_TYPE_VNIC_SVIF:
+	case TF_TBL_TYPE_EXT:   /* No pools for this type */
+	case TF_TBL_TYPE_EXT_0: /* No pools for this type */
+	default:
+		*hcapi_type = -1;
+		rc = -EOPNOTSUPP;
+	}
+
+	return rc;
+}
+
+int
+tf_rm_convert_index(struct tf_session *tfs,
+		    enum tf_dir dir,
+		    enum tf_tbl_type type,
+		    enum tf_rm_convert_type c_type,
+		    uint32_t index,
+		    uint32_t *convert_index)
+{
+	int rc;
+	struct tf_rm_resc *resc;
+	uint32_t hcapi_type;
+	uint32_t base_index;
+
+	if (dir == TF_DIR_RX)
+		resc = &tfs->resc.rx;
+	else if (dir == TF_DIR_TX)
+		resc = &tfs->resc.tx;
+	else
+		return -EOPNOTSUPP;
+
+	rc = tf_rm_convert_tbl_type(type, &hcapi_type);
+	if (rc)
+		return -1;
+
+	switch (type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+	case TF_TBL_TYPE_MCAST_GROUPS:
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+	case TF_TBL_TYPE_ACT_STATS_64:
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		base_index = resc->sram_entry[hcapi_type].start;
+		break;
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+	case TF_TBL_TYPE_METER_PROF:
+	case TF_TBL_TYPE_METER_INST:
+	case TF_TBL_TYPE_UPAR:
+	case TF_TBL_TYPE_EPOCH0:
+	case TF_TBL_TYPE_EPOCH1:
+	case TF_TBL_TYPE_METADATA:
+	case TF_TBL_TYPE_CT_STATE:
+	case TF_TBL_TYPE_RANGE_PROF:
+	case TF_TBL_TYPE_RANGE_ENTRY:
+	case TF_TBL_TYPE_LAG:
+		base_index = resc->hw_entry[hcapi_type].start;
+		break;
+	/* Not yet supported */
+	case TF_TBL_TYPE_VNIC_SVIF:
+	case TF_TBL_TYPE_EXT:   /* No pools for this type */
+	case TF_TBL_TYPE_EXT_0: /* No pools for this type */
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	switch (c_type) {
+	case TF_RM_CONVERT_RM_BASE:
+		*convert_index = index - base_index;
+		break;
+	case TF_RM_CONVERT_ADD_BASE:
+		*convert_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
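
A worked example of the conversion: if the RX full-action SRAM grant for this
session starts at index 100 (an illustrative value, not from the patch),
converting between pool-relative and HW indices looks like this (return-code
checks elided):

/* Usage sketch with assumed values: FULL_ACT_RECORD RX base = 100. */
static void
example_convert(struct tf_session *tfs)
{
	uint32_t hw_idx = 0;
	uint32_t pool_idx = 0;

	/* Pool-relative 5 + base 100 -> 105, the index programmed to HW */
	tf_rm_convert_index(tfs, TF_DIR_RX, TF_TBL_TYPE_FULL_ACT_RECORD,
			    TF_RM_CONVERT_ADD_BASE, 5, &hw_idx);

	/* HW index 105 - base 100 -> 5, back to the pool-relative index */
	tf_rm_convert_index(tfs, TF_DIR_RX, TF_TBL_TYPE_FULL_ACT_RECORD,
			    TF_RM_CONVERT_RM_BASE, hw_idx, &pool_idx);
}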
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 57ce19b..e69d443 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -107,6 +107,54 @@ struct tf_rm_sram_alloc {
 };
 
 /**
+ * Resource Manager arrays for a single direction
+ */
+struct tf_rm_resc {
+	/** array of HW resource entries */
+	struct tf_rm_entry hw_entry[TF_RESC_TYPE_HW_MAX];
+	/** array of SRAM resource entries */
+	struct tf_rm_entry sram_entry[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Resource Manager Database
+ */
+struct tf_rm_db {
+	struct tf_rm_resc rx;
+	struct tf_rm_resc tx;
+};
+
+/**
+ * Helper function converting direction to text string
+ */
+const char
+*tf_dir_2_str(enum tf_dir dir);
+
+/**
+ * Helper function converting identifier to text string
+ */
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type);
+
+/**
+ * Helper function converting tcam type to text string
+ */
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+
+/**
+ * Helper function used to convert HW HCAPI resource type to a string.
+ */
+const char
+*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type);
+
+/**
+ * Helper function used to convert SRAM HCAPI resource type to a string.
+ */
+const char
+*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type);
+
+/**
  * Initializes the Resource Manager and the associated database
  * entries for HW and SRAM resources. Must be called before any other
  * Resource Manager functions.
@@ -143,4 +191,131 @@ int tf_rm_allocate_validate(struct tf *tfp);
  *   - (-ENOTEMPTY) if resources are not cleaned up before close
  */
 int tf_rm_close(struct tf *tfp);
+
+#if (TF_SHADOW == 1)
+/**
+ * Initializes Shadow DB of configuration elements
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * Returns:
+ *  0  - Success
+ */
+int tf_rm_shadow_db_init(struct tf_session *tfs);
+#endif /* TF_SHADOW */
+
+/**
+ * Perform a Session Pool lookup using the Tcam table type.
+ *
+ * Function will print error msg if tcam type is unsupported or lookup
+ * failed.
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * [in] type
+ *   Type of the object
+ *
+ * [in] dir
+ *    Receive or transmit direction
+ *
+ * [in/out]  session_pool
+ *   Session pool
+ *
+ * Returns:
+ *  0           - Success, will set the **pool
+ *  -EOPNOTSUPP - Type is not supported
+ */
+int
+tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
+			    enum tf_dir dir,
+			    enum tf_tcam_tbl_type type,
+			    struct bitalloc **pool);
+
+/**
+ * Perform a Session Pool lookup using the Table type.
+ *
+ * Function will print error msg if table type is unsupported or
+ * lookup failed.
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * [in] type
+ *   Type of the object
+ *
+ * [in] dir
+ *    Receive or transmit direction
+ *
+ * [in/out]  session_pool
+ *   Session pool
+ *
+ * Returns:
+ *  0           - Success, will set the **pool
+ *  -EOPNOTSUPP - Type is not supported
+ */
+int
+tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
+			   enum tf_dir dir,
+			   enum tf_tbl_type type,
+			   struct bitalloc **pool);
+
+/**
+ * Converts the TF Table Type to internal HCAPI_TYPE
+ *
+ * [in] type
+ *   Type to be converted
+ *
+ * [in/out] hcapi_type
+ *   Converted type
+ *
+ * Returns:
+ *  0           - Success, will set the *hcapi_type
+ *  -EOPNOTSUPP - Type is not supported
+ */
+int
+tf_rm_convert_tbl_type(enum tf_tbl_type type,
+		       uint32_t *hcapi_type);
+
+/**
+ * TF RM Convert of index methods.
+ */
+enum tf_rm_convert_type {
+	/** Adds the base of the Session Pool to the index */
+	TF_RM_CONVERT_ADD_BASE,
+	/** Removes the Session Pool base from the index */
+	TF_RM_CONVERT_RM_BASE
+};
+
+/**
+ * Provides conversion of the Table Type index in relation to the
+ * Session Pool base.
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * [in] dir
+ *    Receive or transmit direction
+ *
+ * [in] type
+ *   Type of the object
+ *
+ * [in] c_type
+ *   Type of conversion to perform
+ *
+ * [in] index
+ *   Index to be converted
+ *
+ * [in/out]  convert_index
+ *   Pointer to the converted index
+ *
+ * Returns:
+ *    0          - Success
+ *   -EOPNOTSUPP - Direction, type or conversion method is not supported
+ *   -1          - Internal error
+ */
+int
+tf_rm_convert_index(struct tf_session *tfs,
+		    enum tf_dir dir,
+		    enum tf_tbl_type type,
+		    enum tf_rm_convert_type c_type,
+		    uint32_t index,
+		    uint32_t *convert_index);
+
 #endif /* TF_RM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index f845984..34b6c41 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -68,7 +68,7 @@ struct tf_session {
 	 */
 	bool shadow_copy;
 
-	/** 
+	/**
 	 * Session Reference Count. To keep track of functions per
 	 * session the ref_count is incremented. There is also a
 	 * parallel TruFlow Firmware ref_count in case the TruFlow
@@ -76,11 +76,215 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
+	/** Session HW and SRAM resources */
+	struct tf_rm_db resc;
+
+	/* Session HW resource pools */
+
+	/** RX L2 CTXT TCAM Pool */
+	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	/** TX L2 CTXT TCAM Pool */
+	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+
+	/** RX Profile Func Pool */
+	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
+	/** TX Profile Func Pool */
+	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
+
+	/** RX Profile TCAM Pool */
+	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
+	/** TX Profile TCAM Pool */
+	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
+
+	/** RX EM Profile ID Pool */
+	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
+	/** TX EM Profile ID Pool */
+	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
+
+	/** RX WC Profile Pool */
+	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
+	/** TX WC Profile Pool */
+	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
+
+	/* TBD, how do we want to handle EM records? */
+	/* EM Records are not controlled by way of a pool */
+
+	/** RX WC TCAM Pool */
+	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
+	/** TX WC TCAM Pool */
+	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
+
+	/** RX Meter Profile Pool */
+	BITALLOC_INST(TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
+	/** TX Meter Profile Pool */
+	BITALLOC_INST(TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
+
+	/** RX Meter Instance Pool */
+	BITALLOC_INST(TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
+	/** TX Meter Pool */
+	BITALLOC_INST(TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
+
+	/** RX Mirror Configuration Pool */
+	BITALLOC_INST(TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
+	/** TX Mirror Configuration Pool */
+	BITALLOC_INST(TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
+
+	/** RX UPAR Pool */
+	BITALLOC_INST(TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
+	/** TX UPAR Pool */
+	BITALLOC_INST(TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
+
+	/** RX SP TCAM Pool */
+	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
+	/** TX SP TCAM Pool */
+	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
+
+	/** RX FKB Pool */
+	BITALLOC_INST(TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
+	/** TX FKB Pool */
+	BITALLOC_INST(TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
+
+	/** RX Table Scope Pool */
+	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
+	/** TX Table Scope Pool */
+	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
+
+	/** RX L2 Func Pool */
+	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
+	/** TX L2 Func Pool */
+	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
+
+	/** RX Epoch0 Pool */
+	BITALLOC_INST(TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
+	/** TX Epoch0 Pool */
+	BITALLOC_INST(TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
+
+	/** RX Epoch1 Pool */
+	BITALLOC_INST(TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
+	/** TX Epoch1 Pool */
+	BITALLOC_INST(TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
+
+	/** RX MetaData Profile Pool */
+	BITALLOC_INST(TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
+	/** TX MetaData Profile Pool */
+	BITALLOC_INST(TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
+
+	/** RX Connection Tracking State Pool */
+	BITALLOC_INST(TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
+	/** TX Connection Tracking State Pool */
+	BITALLOC_INST(TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
+
+	/** RX Range Profile Pool */
+	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
+	/** TX Range Profile Pool */
+	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
+
+	/** RX Range Pool */
+	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
+	/** TX Range Pool */
+	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
+
+	/** RX LAG Pool */
+	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
+	/** TX LAG Pool */
+	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
+
+	/* Session SRAM pools */
+
+	/** RX Full Action Record Pool */
+	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_RX,
+		      TF_RSVD_SRAM_FULL_ACTION_RX);
+	/** TX Full Action Record Pool */
+	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_TX,
+		      TF_RSVD_SRAM_FULL_ACTION_TX);
+
+	/** RX Multicast Group Pool, only RX is supported */
+	BITALLOC_INST(TF_SRAM_MCG_POOL_NAME_RX,
+		      TF_RSVD_SRAM_MCG_RX);
+
+	/** RX Encap 8B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_RX,
+		      TF_RSVD_SRAM_ENCAP_8B_RX);
+	/** TX Encap 8B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_ENCAP_8B_TX);
+
+	/** RX Encap 16B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_RX,
+		      TF_RSVD_SRAM_ENCAP_16B_RX);
+	/** TX Encap 16B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_ENCAP_16B_TX);
+
+	/** TX Encap 64B Pool, only TX is supported */
+	BITALLOC_INST(TF_SRAM_ENCAP_64B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_ENCAP_64B_TX);
+
+	/** RX Source Properties SMAC Pool */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_RX,
+		      TF_RSVD_SRAM_SP_SMAC_RX);
+	/** TX Source Properties SMAC Pool */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_TX,
+		      TF_RSVD_SRAM_SP_SMAC_TX);
+
+	/** TX Source Properties SMAC IPv4 Pool, only TX is supported */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
+		      TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
+
+	/** TX Source Properties SMAC IPv6 Pool, only TX is supported */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
+		      TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
+
+	/** RX Counter 64B Pool */
+	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_RX,
+		      TF_RSVD_SRAM_COUNTER_64B_RX);
+	/** TX Counter 64B Pool */
+	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_COUNTER_64B_TX);
+
+	/** RX NAT Source Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_SPORT_RX);
+	/** TX NAT Source Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_SPORT_TX);
+
+	/** RX NAT Destination Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_DPORT_RX);
+	/** TX NAT Destination Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_DPORT_TX);
+
+	/** RX NAT Source IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_S_IPV4_RX);
+	/** TX NAT Source IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_S_IPV4_TX);
+
+	/** RX NAT Destination IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_D_IPV4_RX);
+	/** TX NAT Destination IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_D_IPV4_TX);
+
+	/**
+	 * Pools not allocated from HCAPI RM
+	 */
+
+	/** RX L2 Ctx Remap ID Pool */
+	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	/** TX L2 Ctx Remap ID Pool */
+	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+
 	/** CRC32 seed table */
 #define TF_LKUP_SEED_MEM_SIZE 512
 	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
+
 	/** Lookup3 init values */
 	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
-
 };
+
 #endif /* _TF_SESSION_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 08/33] net/bnxt: add resource manager functionality
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (6 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 07/33] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 09/33] net/bnxt: add tf core identifier support Venkat Duvvuru
                   ` (25 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TruFlow RM functionality for resource handling
- Update the TruFlow Resource Manager (RM) with resource
  support functions for debugging as well as resource cleanup.
- Add support for internal and external pools.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c    |   14 +
 drivers/net/bnxt/tf_core/tf_core.h    |   26 +
 drivers/net/bnxt/tf_core/tf_rm.c      | 1718 +++++++++++++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_session.h |   10 +
 drivers/net/bnxt/tf_core/tf_tbl.h     |   43 +
 5 files changed, 1735 insertions(+), 76 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.h

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index c4f23bd..259ffa2 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -148,6 +148,20 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup_close;
 	}
 
+	/* Shadow DB configuration */
+	if (parms->shadow_copy) {
+		/* Ignore shadow_copy setting */
+		session->shadow_copy = 0; /* parms->shadow_copy; */
+#if (TF_SHADOW == 1)
+		rc = tf_rm_shadow_db_init(tfs);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "Shadow DB Initialization failed\n, rc:%d",
+				    rc);
+		/* Add additional processing */
+#endif /* TF_SHADOW */
+	}
+
 	/* Adjust the Session with what firmware allowed us to get */
 	rc = tf_rm_allocate_validate(tfp);
 	if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 3455d8f..16c8251 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -30,6 +30,32 @@ enum tf_dir {
 	TF_DIR_MAX
 };
 
+/**
+ * External pool size
+ *
+ * Defines a single pool of external action records of
+ * fixed size.  Currently, this is an index.
+ */
+#define TF_EXT_POOL_ENTRY_SZ_BYTES 1
+
+/**
+ *  External pool entry count
+ *
+ *  Defines the number of entries in the external action pool
+ */
+#define TF_EXT_POOL_ENTRY_CNT (1 * 1024)
+
+/**
+ * Number of external pools
+ */
+#define TF_EXT_POOL_CNT_MAX 1
+
+/**
+ * External pool Id
+ */
+#define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
+#define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
+
 /********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 56767e7..a5e96f29 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -104,9 +104,82 @@ const char
 	case TF_IDENT_TYPE_L2_FUNC:
 		return "l2_func";
 	default:
-		break;
+		return "Invalid identifier";
+	}
+}
+
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+{
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		return "l2_ctxt_tcam";
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		return "prof_tcam";
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		return "wc_tcam";
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+		return "veb_tcam";
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+		return "sp_tcam";
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+		return "ct_rule_tcam";
+	default:
+		return "Invalid tcam table type";
+	}
+}
+
+const char
+*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
+{
+	switch (hw_type) {
+	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
+		return "L2 ctxt tcam";
+	case TF_RESC_TYPE_HW_PROF_FUNC:
+		return "Profile Func";
+	case TF_RESC_TYPE_HW_PROF_TCAM:
+		return "Profile tcam";
+	case TF_RESC_TYPE_HW_EM_PROF_ID:
+		return "EM profile id";
+	case TF_RESC_TYPE_HW_EM_REC:
+		return "EM record";
+	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
+		return "WC tcam profile id";
+	case TF_RESC_TYPE_HW_WC_TCAM:
+		return "WC tcam";
+	case TF_RESC_TYPE_HW_METER_PROF:
+		return "Meter profile";
+	case TF_RESC_TYPE_HW_METER_INST:
+		return "Meter instance";
+	case TF_RESC_TYPE_HW_MIRROR:
+		return "Mirror";
+	case TF_RESC_TYPE_HW_UPAR:
+		return "UPAR";
+	case TF_RESC_TYPE_HW_SP_TCAM:
+		return "Source properties tcam";
+	case TF_RESC_TYPE_HW_L2_FUNC:
+		return "L2 Function";
+	case TF_RESC_TYPE_HW_FKB:
+		return "FKB";
+	case TF_RESC_TYPE_HW_TBL_SCOPE:
+		return "Table scope";
+	case TF_RESC_TYPE_HW_EPOCH0:
+		return "EPOCH0";
+	case TF_RESC_TYPE_HW_EPOCH1:
+		return "EPOCH1";
+	case TF_RESC_TYPE_HW_METADATA:
+		return "Metadata";
+	case TF_RESC_TYPE_HW_CT_STATE:
+		return "Connection tracking state";
+	case TF_RESC_TYPE_HW_RANGE_PROF:
+		return "Range profile";
+	case TF_RESC_TYPE_HW_RANGE_ENTRY:
+		return "Range entry";
+	case TF_RESC_TYPE_HW_LAG_ENTRY:
+		return "LAG";
+	default:
+		return "Invalid identifier";
 	}
-	return "Invalid identifier";
 }
 
 const char
@@ -145,6 +218,93 @@ const char
 }
 
 /**
+ * Helper function to perform a HW HCAPI resource type lookup against
+ * the reserved value of the same static type.
+ *
+ * Returns:
+ *   -EOPNOTSUPP - Reserved resource type not supported
+ *   Value       - Integer value of the reserved value for the requested type
+ */
+static int
+tf_rm_rsvd_hw_value(enum tf_dir dir, enum tf_resource_type_hw index)
+{
+	uint32_t value = -EOPNOTSUPP;
+
+	switch (index) {
+	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_L2_CTXT_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_PROF_FUNC:
+		TF_RESC_RSVD(dir, TF_RSVD_PROF_FUNC, value);
+		break;
+	case TF_RESC_TYPE_HW_PROF_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_PROF_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_EM_PROF_ID:
+		TF_RESC_RSVD(dir, TF_RSVD_EM_PROF_ID, value);
+		break;
+	case TF_RESC_TYPE_HW_EM_REC:
+		TF_RESC_RSVD(dir, TF_RSVD_EM_REC, value);
+		break;
+	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
+		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM_PROF_ID, value);
+		break;
+	case TF_RESC_TYPE_HW_WC_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_METER_PROF:
+		TF_RESC_RSVD(dir, TF_RSVD_METER_PROF, value);
+		break;
+	case TF_RESC_TYPE_HW_METER_INST:
+		TF_RESC_RSVD(dir, TF_RSVD_METER_INST, value);
+		break;
+	case TF_RESC_TYPE_HW_MIRROR:
+		TF_RESC_RSVD(dir, TF_RSVD_MIRROR, value);
+		break;
+	case TF_RESC_TYPE_HW_UPAR:
+		TF_RESC_RSVD(dir, TF_RSVD_UPAR, value);
+		break;
+	case TF_RESC_TYPE_HW_SP_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_SP_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_L2_FUNC:
+		TF_RESC_RSVD(dir, TF_RSVD_L2_FUNC, value);
+		break;
+	case TF_RESC_TYPE_HW_FKB:
+		TF_RESC_RSVD(dir, TF_RSVD_FKB, value);
+		break;
+	case TF_RESC_TYPE_HW_TBL_SCOPE:
+		TF_RESC_RSVD(dir, TF_RSVD_TBL_SCOPE, value);
+		break;
+	case TF_RESC_TYPE_HW_EPOCH0:
+		TF_RESC_RSVD(dir, TF_RSVD_EPOCH0, value);
+		break;
+	case TF_RESC_TYPE_HW_EPOCH1:
+		TF_RESC_RSVD(dir, TF_RSVD_EPOCH1, value);
+		break;
+	case TF_RESC_TYPE_HW_METADATA:
+		TF_RESC_RSVD(dir, TF_RSVD_METADATA, value);
+		break;
+	case TF_RESC_TYPE_HW_CT_STATE:
+		TF_RESC_RSVD(dir, TF_RSVD_CT_STATE, value);
+		break;
+	case TF_RESC_TYPE_HW_RANGE_PROF:
+		TF_RESC_RSVD(dir, TF_RSVD_RANGE_PROF, value);
+		break;
+	case TF_RESC_TYPE_HW_RANGE_ENTRY:
+		TF_RESC_RSVD(dir, TF_RSVD_RANGE_ENTRY, value);
+		break;
+	case TF_RESC_TYPE_HW_LAG_ENTRY:
+		TF_RESC_RSVD(dir, TF_RSVD_LAG_ENTRY, value);
+		break;
+	default:
+		break;
+	}
+
+	return value;
+}
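
TF_RESC_RSVD() and TF_RM_CHECK_HW_ALLOC() are defined elsewhere in the series.
Based on how they are used in this file, their assumed shapes are roughly the
following (a hedged sketch, not the actual definitions):

/* Assumed shape of TF_RESC_RSVD(): select the _RX or _TX reserved
 * constant for the given direction (sketch; defined elsewhere).
 */
#define TF_RESC_RSVD(dir, type, dst) do {			\
		if ((dir) == TF_DIR_RX)				\
			(dst) = type ## _RX;			\
		else						\
			(dst) = type ## _TX;			\
	} while (0)

/* Assumed shape of TF_RM_CHECK_HW_ALLOC(): set the per-type bit in
 * error_flag when the queried max falls short of the static
 * reservation (sketch; defined elsewhere).
 */
#define TF_RM_CHECK_HW_ALLOC(query, dir, hcapi_type, def_value, eflag)	\
	do {								\
		uint32_t rsvd;						\
		TF_RESC_RSVD(dir, def_value, rsvd);			\
		if ((query)->hw_query[(hcapi_type)].max < rsvd)		\
			*(eflag) |= 1UL << (hcapi_type);		\
	} while (0)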
+
+/**
  * Helper function to perform a SRAM HCAPI resource type lookup
  * against the reserved value of the same static type.
  *
@@ -205,6 +365,36 @@ tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
 }
 
 /**
+ * Helper function to print all the HW resource qcaps errors reported
+ * in the error_flag.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] hw_query
+ *   Pointer to the HW query result the check was performed against
+ *
+ * [in] error_flag
+ *   Pointer to the HW error flags created at time of the query check
+ */
+static void
+tf_rm_print_hw_qcaps_error(enum tf_dir dir,
+			   struct tf_rm_hw_query *hw_query,
+			   uint32_t *error_flag)
+{
+	int i;
+
+	PMD_DRV_LOG(ERR, "QCAPS errors HW\n");
+	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	PMD_DRV_LOG(ERR, "  Elements:\n");
+
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
+		if (*error_flag & 1 << i)
+			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+				    tf_hcapi_hw_2_str(i),
+				    hw_query->hw_query[i].max,
+				    tf_rm_rsvd_hw_value(dir, i));
+	}
+}
+
+/**
  * Helper function to print all the SRAM resource qcaps errors
  * reported in the error_flag.
  *
@@ -264,12 +454,139 @@ tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
 			    uint32_t *error_flag)
 {
 	*error_flag = 0;
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_L2_CTXT_TCAM,
+			     TF_RSVD_L2_CTXT_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_PROF_FUNC,
+			     TF_RSVD_PROF_FUNC,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_PROF_TCAM,
+			     TF_RSVD_PROF_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EM_PROF_ID,
+			     TF_RSVD_EM_PROF_ID,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EM_REC,
+			     TF_RSVD_EM_REC,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+			     TF_RSVD_WC_TCAM_PROF_ID,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_WC_TCAM,
+			     TF_RSVD_WC_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_METER_PROF,
+			     TF_RSVD_METER_PROF,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_METER_INST,
+			     TF_RSVD_METER_INST,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_MIRROR,
+			     TF_RSVD_MIRROR,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_UPAR,
+			     TF_RSVD_UPAR,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_SP_TCAM,
+			     TF_RSVD_SP_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_L2_FUNC,
+			     TF_RSVD_L2_FUNC,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_FKB,
+			     TF_RSVD_FKB,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_TBL_SCOPE,
+			     TF_RSVD_TBL_SCOPE,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EPOCH0,
+			     TF_RSVD_EPOCH0,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EPOCH1,
+			     TF_RSVD_EPOCH1,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_METADATA,
+			     TF_RSVD_METADATA,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_CT_STATE,
+			     TF_RSVD_CT_STATE,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_RANGE_PROF,
+			     TF_RSVD_RANGE_PROF,
+			     error_flag);
+
 	TF_RM_CHECK_HW_ALLOC(query,
 			     dir,
 			     TF_RESC_TYPE_HW_RANGE_ENTRY,
 			     TF_RSVD_RANGE_ENTRY,
 			     error_flag);
 
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_LAG_ENTRY,
+			     TF_RSVD_LAG_ENTRY,
+			     error_flag);
+
 	if (*error_flag != 0)
 		return -ENOMEM;
 
@@ -434,26 +751,584 @@ tf_rm_reserve_range(uint32_t count,
 			for (i = 0; i < rsv_begin; i++)
 				ba_alloc_index(pool, i);
 
-			/* Skip and then do the remaining */
-			if (rsv_end < max - 1) {
-				for (i = rsv_end; i < max; i++)
-					ba_alloc_index(pool, i);
-			}
-		}
-	}
+			/* Skip and then do the remaining */
+			if (rsv_end < max - 1) {
+				for (i = rsv_end; i < max; i++)
+					ba_alloc_index(pool, i);
+			}
+		}
+	}
+}
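
The effect of tf_rm_reserve_range() is easiest to see with numbers. Assuming
(illustratively) a 64-entry pool where firmware granted start = 16 and
stride = 8, every index outside the granted window is pre-allocated, so the
session can only hand out indices from the window. A sketch of that outcome,
assuming a ba_alloc() accessor from bitalloc.h that returns the first free
index:

/* Sketch with assumed values (not from the patch). */
static void
example_reserved_alloc(struct bitalloc *pool)
{
	/* With the grant above, indices 0..15 and the tail of the pool
	 * are already marked allocated, so the first free index is the
	 * start of the granted window.
	 */
	int idx = ba_alloc(pool);	/* expected: 16 */

	(void)idx;
}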
+
+/**
+ * Internal function to mark all the l2 ctxt allocated that Truflow
+ * does not own.
+ */
+static void
+tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
+	uint32_t end = 0;
+
+	/* l2 ctxt rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
+
+	/* l2 ctxt tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
+}
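
tf_rm_rsvd_l2_ctxt() above and the tf_rm_rsvd_*() functions that follow all
repeat the same per-direction computation. A hypothetical helper collapsing
that pattern might look as follows (names are illustrative, not part of this
patch):

/* Sketch only: a per-direction reservation helper that the repeated
 * tf_rm_rsvd_*() bodies could call. 'entry' is the granted range for
 * one HCAPI type; 'max' and 'pool' describe the session pool guarding
 * that type.
 */
static void
tf_rm_rsvd_one(struct tf_rm_entry *entry,
	       uint32_t max,
	       struct bitalloc *pool)
{
	uint32_t end = 0;

	if (entry->stride > 0)
		end = entry->start + entry->stride - 1;

	tf_rm_reserve_range(entry->stride,
			    entry->start,
			    end,
			    max,
			    pool);
}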
+
+/**
+ * Internal function to mark all the profile tcam and profile func
+ * resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_prof(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_PROF_FUNC;
+	uint32_t end = 0;
+
+	/* profile func rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_FUNC,
+			    tfs->TF_PROF_FUNC_POOL_NAME_RX);
+
+	/* profile func tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_FUNC,
+			    tfs->TF_PROF_FUNC_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_PROF_TCAM;
+
+	/* profile tcam rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_TCAM,
+			    tfs->TF_PROF_TCAM_POOL_NAME_RX);
+
+	/* profile tcam tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_TCAM,
+			    tfs->TF_PROF_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the em profile id allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_em_prof(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_EM_PROF_ID;
+	uint32_t end = 0;
+
+	/* em prof id rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EM_PROF_ID,
+			    tfs->TF_EM_PROF_ID_POOL_NAME_RX);
+
+	/* em prof id tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EM_PROF_ID,
+			    tfs->TF_EM_PROF_ID_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the wildcard tcam and profile id
+ * resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_wc(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_WC_TCAM_PROF_ID;
+	uint32_t end = 0;
+
+	/* wc profile id rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_PROF_ID,
+			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX);
+
+	/* wc profile id tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_PROF_ID,
+			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_WC_TCAM;
+
+	/* wc tcam rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_TCAM_ROW,
+			    tfs->TF_WC_TCAM_POOL_NAME_RX);
+
+	/* wc tcam tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_TCAM_ROW,
+			    tfs->TF_WC_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the meter resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_meter(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_METER_PROF;
+	uint32_t end = 0;
+
+	/* meter profiles rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER_PROF,
+			    tfs->TF_METER_PROF_POOL_NAME_RX);
+
+	/* meter profiles tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER_PROF,
+			    tfs->TF_METER_PROF_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_METER_INST;
+
+	/* meter rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER,
+			    tfs->TF_METER_INST_POOL_NAME_RX);
+
+	/* meter tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER,
+			    tfs->TF_METER_INST_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the mirror resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_mirror(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_MIRROR;
+	uint32_t end = 0;
+
+	/* mirror rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_MIRROR,
+			    tfs->TF_MIRROR_POOL_NAME_RX);
+
+	/* mirror tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_MIRROR,
+			    tfs->TF_MIRROR_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the upar resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_upar(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_UPAR;
+	uint32_t end = 0;
+
+	/* upar rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_UPAR,
+			    tfs->TF_UPAR_POOL_NAME_RX);
+
+	/* upar tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_UPAR,
+			    tfs->TF_UPAR_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the sp tcam resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sp_tcam(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_SP_TCAM;
+	uint32_t end = 0;
+
+	/* sp tcam rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_SP_TCAM,
+			    tfs->TF_SP_TCAM_POOL_NAME_RX);
+
+	/* sp tcam tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_SP_TCAM,
+			    tfs->TF_SP_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the l2 func resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_l2_func(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
+	uint32_t end = 0;
+
+	/* l2 func rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_RX);
+
+	/* l2 func tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the fkb resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_fkb(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_FKB;
+	uint32_t end = 0;
+
+	/* fkb rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_FKB,
+			    tfs->TF_FKB_POOL_NAME_RX);
+
+	/* fkb tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_FKB,
+			    tfs->TF_FKB_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the tbl scope resources allocated
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_tbl_scope(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_TBL_SCOPE;
+	uint32_t end = 0;
+
+	/* tbl scope rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_TBL_SCOPE,
+			    tfs->TF_TBL_SCOPE_POOL_NAME_RX);
+
+	/* tbl scope tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_TBL_SCOPE,
+			    tfs->TF_TBL_SCOPE_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the epoch resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_epoch(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_EPOCH0;
+	uint32_t end = 0;
+
+	/* epoch0 rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH0,
+			    tfs->TF_EPOCH0_POOL_NAME_RX);
+
+	/* epoch0 tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH0,
+			    tfs->TF_EPOCH0_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_EPOCH1;
+
+	/* epoch1 rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH1,
+			    tfs->TF_EPOCH1_POOL_NAME_RX);
+
+	/* epoch1 tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH1,
+			    tfs->TF_EPOCH1_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the metadata resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_metadata(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_METADATA;
+	uint32_t end = 0;
+
+	/* metadata rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METADATA,
+			    tfs->TF_METADATA_POOL_NAME_RX);
+
+	/* metadata tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METADATA,
+			    tfs->TF_METADATA_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the ct state resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_ct_state(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_CT_STATE;
+	uint32_t end = 0;
+
+	/* ct state rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_CT_STATE,
+			    tfs->TF_CT_STATE_POOL_NAME_RX);
+
+	/* ct state tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_CT_STATE,
+			    tfs->TF_CT_STATE_POOL_NAME_TX);
 }
 
 /**
- * Internal function to mark all the l2 ctxt allocated that Truflow
- * does not own.
+ * Internal function to mark all the range resources allocated that
+ * Truflow does not own.
  */
 static void
-tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
+tf_rm_rsvd_range(struct tf_session *tfs)
 {
-	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
+	uint32_t index = TF_RESC_TYPE_HW_RANGE_PROF;
 	uint32_t end = 0;
 
-	/* l2 ctxt rx direction */
+	/* range profile rx direction */
 	if (tfs->resc.rx.hw_entry[index].stride > 0)
 		end = tfs->resc.rx.hw_entry[index].start +
 			tfs->resc.rx.hw_entry[index].stride - 1;
@@ -461,10 +1336,10 @@ tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
 			    tfs->resc.rx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
+			    TF_NUM_RANGE_PROF,
+			    tfs->TF_RANGE_PROF_POOL_NAME_RX);
 
-	/* l2 ctxt tx direction */
+	/* range profile tx direction */
 	if (tfs->resc.tx.hw_entry[index].stride > 0)
 		end = tfs->resc.tx.hw_entry[index].start +
 			tfs->resc.tx.hw_entry[index].stride - 1;
@@ -472,21 +1347,45 @@ tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
 			    tfs->resc.tx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
+			    TF_NUM_RANGE_PROF,
+			    tfs->TF_RANGE_PROF_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_RANGE_ENTRY;
+
+	/* range entry rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_RANGE_ENTRY,
+			    tfs->TF_RANGE_ENTRY_POOL_NAME_RX);
+
+	/* range entry tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_RANGE_ENTRY,
+			    tfs->TF_RANGE_ENTRY_POOL_NAME_TX);
 }
 
 /**
- * Internal function to mark all the l2 func resources allocated that
+ * Internal function to mark as allocated all the lag resources that
  * Truflow does not own.
  */
 static void
-tf_rm_rsvd_l2_func(struct tf_session *tfs)
+tf_rm_rsvd_lag_entry(struct tf_session *tfs)
 {
-	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
+	uint32_t index = TF_RESC_TYPE_HW_LAG_ENTRY;
 	uint32_t end = 0;
 
-	/* l2 func rx direction */
+	/* lag entry rx direction */
 	if (tfs->resc.rx.hw_entry[index].stride > 0)
 		end = tfs->resc.rx.hw_entry[index].start +
 			tfs->resc.rx.hw_entry[index].stride - 1;
@@ -494,10 +1393,10 @@ tf_rm_rsvd_l2_func(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
 			    tfs->resc.rx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_RX);
+			    TF_NUM_LAG_ENTRY,
+			    tfs->TF_LAG_ENTRY_POOL_NAME_RX);
 
-	/* l2 func tx direction */
+	/* lag entry tx direction */
 	if (tfs->resc.tx.hw_entry[index].stride > 0)
 		end = tfs->resc.tx.hw_entry[index].start +
 			tfs->resc.tx.hw_entry[index].stride - 1;
@@ -505,8 +1404,8 @@ tf_rm_rsvd_l2_func(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
 			    tfs->resc.tx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_TX);
+			    TF_NUM_LAG_ENTRY,
+			    tfs->TF_LAG_ENTRY_POOL_NAME_TX);
 }
 
 /**
@@ -909,7 +1808,21 @@ tf_rm_reserve_hw(struct tf *tfp)
 	 * used except the resources that Truflow took ownership off.
 	 */
 	tf_rm_rsvd_l2_ctxt(tfs);
+	tf_rm_rsvd_prof(tfs);
+	tf_rm_rsvd_em_prof(tfs);
+	tf_rm_rsvd_wc(tfs);
+	tf_rm_rsvd_mirror(tfs);
+	tf_rm_rsvd_meter(tfs);
+	tf_rm_rsvd_upar(tfs);
+	tf_rm_rsvd_sp_tcam(tfs);
 	tf_rm_rsvd_l2_func(tfs);
+	tf_rm_rsvd_fkb(tfs);
+	tf_rm_rsvd_tbl_scope(tfs);
+	tf_rm_rsvd_epoch(tfs);
+	tf_rm_rsvd_metadata(tfs);
+	tf_rm_rsvd_ct_state(tfs);
+	tf_rm_rsvd_range(tfs);
+	tf_rm_rsvd_lag_entry(tfs);
 }
 
 /**
@@ -972,6 +1885,7 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
 			tf_dir_2_str(dir),
 			error_flag);
+		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
 		goto cleanup;
 	}
 
@@ -1032,65 +1946,388 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	struct tf_rm_entry *sram_entries;
 	uint32_t error_flag;
 
-	if (dir == TF_DIR_RX)
-		sram_entries = tfs->resc.rx.sram_entry;
-	else
-		sram_entries = tfs->resc.tx.sram_entry;
+	if (dir == TF_DIR_RX)
+		sram_entries = tfs->resc.rx.sram_entry;
+	else
+		sram_entries = tfs->resc.tx.sram_entry;
+
+	/* Query for Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+			tf_dir_2_str(dir),
+			error_flag);
+		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
+		goto cleanup;
+	}
+
+	/* Post process SRAM capability */
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
+		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+
+	/* Allocate Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_alloc(tfp,
+					    dir,
+					    &sram_alloc,
+					    sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	/* Perform SRAM allocation validation as it's possible the
+	 * resource availability changed between qcaps and alloc
+	 */
+	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	return 0;
+
+ cleanup:
+	return -1;
+}
+
+/**
+ * Helper function used to prune a HW resource array to only hold
+ * elements that needs to be flushed.
+ *
+ * [in] tfs
+ *   Session handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] hw_entries
+ *   Master HW Resource database
+ *
+ * [in/out] flush_entries
+ *   Pruned HW Resource database of entries to be flushed. This
+ *   array should be passed in as a complete copy of the master HW
+ *   Resource database. The outgoing result will be a pruned version
+ *   based on the result of the requested checking
+ *
+ * Returns:
+ *    0 - Success, no flush required
+ *    1 - Success, flush required
+ *   -1 - Internal error
+ */
+static int
+tf_rm_hw_to_flush(struct tf_session *tfs,
+		  enum tf_dir dir,
+		  struct tf_rm_entry *hw_entries,
+		  struct tf_rm_entry *flush_entries)
+{
+	int rc;
+	int flush_rc = 0;
+	int free_cnt;
+	struct bitalloc *pool;
+
+	/* Check all the hw resource pools and check for left over
+	 * elements. Any found will result in the complete pool of a
+	 * type to get invalidated.
+	 */
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_L2_CTXT_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_PROF_FUNC_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride) {
+		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_PROF_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_EM_PROF_ID_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride) {
+		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	flush_entries[TF_RESC_TYPE_HW_EM_REC].start = 0;
+	flush_entries[TF_RESC_TYPE_HW_EM_REC].stride = 0;
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_WC_TCAM_PROF_ID_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride) {
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_WC_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_METER_PROF_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_PROF].stride) {
+		flush_entries[TF_RESC_TYPE_HW_METER_PROF].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_METER_PROF].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_METER_INST_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_INST].stride) {
+		flush_entries[TF_RESC_TYPE_HW_METER_INST].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_METER_INST].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_MIRROR_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_MIRROR].stride) {
+		flush_entries[TF_RESC_TYPE_HW_MIRROR].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_MIRROR].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_UPAR_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_UPAR].stride) {
+		flush_entries[TF_RESC_TYPE_HW_UPAR].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_UPAR].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SP_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_SP_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_L2_FUNC_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_FUNC].stride) {
+		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_FKB_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_FKB].stride) {
+		flush_entries[TF_RESC_TYPE_HW_FKB].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_FKB].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
 
-	/* Query for Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed\n",
-			    tf_dir_2_str(dir));
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_TBL_SCOPE_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride) {
+		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
+	} else {
+		PMD_DRV_LOG(ERR, "%s: TBL_SCOPE free_cnt:%d, entries:%d\n",
+			    tf_dir_2_str(dir),
+			    free_cnt,
+			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
+		flush_rc = 1;
 	}
 
-	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
-			tf_dir_2_str(dir),
-			error_flag);
-		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_EPOCH0_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH0].stride) {
+		flush_entries[TF_RESC_TYPE_HW_EPOCH0].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_EPOCH0].stride = 0;
+	} else {
+		flush_rc = 1;
 	}
 
-	/* Post process SRAM capability */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
-		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_EPOCH1_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH1].stride) {
+		flush_entries[TF_RESC_TYPE_HW_EPOCH1].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_EPOCH1].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
 
-	/* Allocate Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_alloc(tfp,
-					    dir,
-					    &sram_alloc,
-					    sram_entries);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed\n",
-			    tf_dir_2_str(dir));
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_METADATA_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METADATA].stride) {
+		flush_entries[TF_RESC_TYPE_HW_METADATA].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_METADATA].stride = 0;
+	} else {
+		flush_rc = 1;
 	}
 
-	/* Perform SRAM allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed\n",
-			    tf_dir_2_str(dir));
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_CT_STATE_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_CT_STATE].stride) {
+		flush_entries[TF_RESC_TYPE_HW_CT_STATE].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_CT_STATE].stride = 0;
+	} else {
+		flush_rc = 1;
 	}
 
-	return 0;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_RANGE_PROF_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride) {
+		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
 
- cleanup:
-	return -1;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_RANGE_ENTRY_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride) {
+		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_LAG_ENTRY_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride) {
+		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	return flush_rc;
 }
 
 /**
@@ -1335,6 +2572,32 @@ tf_rm_sram_to_flush(struct tf_session *tfs,
 }
 
 /**
+ * Helper function used to generate an error log for the HW types that
+ * need to be flushed. The types should have been cleaned up ahead of
+ * invoking tf_close_session.
+ *
+ * [in] hw_entries
+ *   HW Resource database holding elements to be flushed
+ */
+static void
+tf_rm_log_hw_flush(enum tf_dir dir,
+		   struct tf_rm_entry *hw_entries)
+{
+	int i;
+
+	/* Walk the hw flush array and log the types that weren't
+	 * cleaned up.
+	 */
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
+		if (hw_entries[i].stride != 0)
+			PMD_DRV_LOG(ERR,
+				    "%s: %s was not cleaned up\n",
+				    tf_dir_2_str(dir),
+				    tf_hcapi_hw_2_str(i));
+	}
+}
+
+/**
  * Helper function used to generate an error log for the SRAM types
  * that needs to be flushed. The types should have been cleaned up
  * ahead of invoking tf_close_session.
@@ -1386,6 +2649,53 @@ tf_rm_init(struct tf *tfp __rte_unused)
 	/* Initialization of HW Resource Pools */
 	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
 	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
+	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
+	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
+	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
+	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
+	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
+
+	/* TBD, how do we want to handle EM records? */
+	/* EM Records should not be controlled by way of a pool */
+
+	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
+	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
+	ba_init(tfs->TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
+	ba_init(tfs->TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
+	ba_init(tfs->TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
+	ba_init(tfs->TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
+	ba_init(tfs->TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
+	ba_init(tfs->TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
+	ba_init(tfs->TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
+	ba_init(tfs->TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
+	ba_init(tfs->TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
+	ba_init(tfs->TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
+
+	ba_init(tfs->TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
+	ba_init(tfs->TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
+
+	ba_init(tfs->TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
+	ba_init(tfs->TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
+
+	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
+	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
+	ba_init(tfs->TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
+	ba_init(tfs->TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
+	ba_init(tfs->TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
+	ba_init(tfs->TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
+	ba_init(tfs->TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
+	ba_init(tfs->TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
+	ba_init(tfs->TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
+	ba_init(tfs->TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
+	ba_init(tfs->TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
+	ba_init(tfs->TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
+	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
+	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
+	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
+	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
+	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
+	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
 
 	/* Initialization of SRAM Resource Pools
 	 * These pools are set to the TFLIB defined MAX sizes not
@@ -1476,6 +2786,7 @@ tf_rm_close(struct tf *tfp)
 	int rc_close = 0;
 	int i;
 	struct tf_rm_entry *hw_entries;
+	struct tf_rm_entry *hw_flush_entries;
 	struct tf_rm_entry *sram_entries;
 	struct tf_rm_entry *sram_flush_entries;
 	struct tf_session *tfs __rte_unused =
@@ -1501,14 +2812,41 @@ tf_rm_close(struct tf *tfp)
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		if (i == TF_DIR_RX) {
 			hw_entries = tfs->resc.rx.hw_entry;
+			hw_flush_entries = flush_resc.rx.hw_entry;
 			sram_entries = tfs->resc.rx.sram_entry;
 			sram_flush_entries = flush_resc.rx.sram_entry;
 		} else {
 			hw_entries = tfs->resc.tx.hw_entry;
+			hw_flush_entries = flush_resc.tx.hw_entry;
 			sram_entries = tfs->resc.tx.sram_entry;
 			sram_flush_entries = flush_resc.tx.sram_entry;
 		}
 
+		/* Check for any not previously freed HW resources and
+		 * flush if required.
+		 */
+		rc = tf_rm_hw_to_flush(tfs, i, hw_entries, hw_flush_entries);
+		if (rc) {
+			rc_close = -ENOTEMPTY;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, lingering HW resources\n",
+				    tf_dir_2_str(i));
+
+			/* Log the entries to be flushed */
+			tf_rm_log_hw_flush(i, hw_flush_entries);
+			rc = tf_msg_session_hw_resc_flush(tfp,
+							  i,
+							  hw_flush_entries);
+			if (rc) {
+				rc_close = rc;
+				/* Log error */
+				PMD_DRV_LOG(ERR,
+					    "%s, HW flush failed\n",
+					    tf_dir_2_str(i));
+			}
+		}
+
 		/* Check for any not previously freed SRAM resources
 		 * and flush if required.
 		 */
@@ -1560,6 +2898,234 @@ tf_rm_close(struct tf *tfp)
 	return rc_close;
 }
 
+#if (TF_SHADOW == 1)
+int
+tf_rm_shadow_db_init(struct tf_session *tfs __rte_unused)
+{
+	int rc = 1;
+
+	return rc;
+}
+#endif /* TF_SHADOW */
+
+int
+tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
+			    enum tf_dir dir,
+			    enum tf_tcam_tbl_type type,
+			    struct bitalloc **pool)
+{
+	int rc = -EOPNOTSUPP;
+
+	*pool = NULL;
+
+	switch (type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_L2_CTXT_TCAM_POOL_NAME,
+				rc);
+		break;
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_PROF_TCAM_POOL_NAME,
+				rc);
+		break;
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_WC_TCAM_POOL_NAME,
+				rc);
+		break;
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+	default:
+		break;
+	}
+
+	if (rc == -EOPNOTSUPP) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Tcam type not supported, type:%d\n",
+			    dir,
+			    type);
+		return rc;
+	} else if (rc == -1) {
+		PMD_DRV_LOG(ERR,
+			    "%s, Tcam type lookup failed, type:%d\n",
+			    tf_dir_2_str(dir),
+			    type);
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
+			   enum tf_dir dir,
+			   enum tf_tbl_type type,
+			   struct bitalloc **pool)
+{
+	int rc = -EOPNOTSUPP;
+
+	*pool = NULL;
+
+	switch (type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_FULL_ACTION_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		/* No pools for TX direction, so bail out */
+		if (dir == TF_DIR_TX)
+			break;
+		TF_RM_GET_POOLS_RX(tfs, pool,
+				   TF_SRAM_MCG_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_ENCAP_8B_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_ENCAP_16B_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		/* No pools for RX direction, so bail out */
+		if (dir == TF_DIR_RX)
+			break;
+		TF_RM_GET_POOLS_TX(tfs, pool,
+				   TF_SRAM_ENCAP_64B_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_SP_SMAC_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		/* No pools for RX direction, so bail out */
+		if (dir == TF_DIR_RX)
+			break;
+		TF_RM_GET_POOLS_TX(tfs, pool,
+				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		/* No pools for RX direction, so bail out */
+		if (dir == TF_DIR_RX)
+			break;
+		TF_RM_GET_POOLS_TX(tfs, pool,
+				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_STATS_64:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_STATS_64B_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_NAT_SPORT_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_NAT_S_IPV4_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_NAT_D_IPV4_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_METER_PROF:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_METER_PROF_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_METER_INST:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_METER_INST_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_MIRROR_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_UPAR:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_UPAR_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_EPOCH0:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_EPOCH0_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_EPOCH1:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_EPOCH1_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_METADATA:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_METADATA_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_CT_STATE:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_CT_STATE_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_RANGE_PROF:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_RANGE_PROF_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_RANGE_ENTRY_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_LAG:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_LAG_ENTRY_POOL_NAME,
+				rc);
+		break;
+	/* Not yet supported */
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+	case TF_TBL_TYPE_VNIC_SVIF:
+		break;
+	/* No bitalloc pools for these types */
+	case TF_TBL_TYPE_EXT:
+	case TF_TBL_TYPE_EXT_0:
+	default:
+		break;
+	}
+
+	if (rc == -EOPNOTSUPP) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Table type not supported, type:%d\n",
+			    dir,
+			    type);
+		return rc;
+	} else if (rc == -1) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Table type lookup failed, type:%d\n",
+			    dir,
+			    type);
+		return rc;
+	}
+
+	return 0;
+}
+
 int
 tf_rm_convert_tbl_type(enum tf_tbl_type type,
 		       uint32_t *hcapi_type)
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 34b6c41..fed34f1 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -12,6 +12,7 @@
 #include "bitalloc.h"
 #include "tf_core.h"
 #include "tf_rm.h"
+#include "tf_tbl.h"
 
 /** Session defines
  */
@@ -285,6 +286,15 @@ struct tf_session {
 
 	/** Lookup3 init values */
 	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
+
+	/** Table scope array */
+	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+	/** Each external pool is associated with a single table scope
+	 *  For each external pool store the associated table scope in
+	 *  this data structure
+	 */
+	uint32_t ext_pool_2_scope[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 };
 
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
new file mode 100644
index 0000000..5a5e72f
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_TBL_H_
+#define _TF_TBL_H_
+
+#include <stdint.h>
+
+enum tf_pg_tbl_lvl {
+	PT_LVL_0,
+	PT_LVL_1,
+	PT_LVL_2,
+	PT_LVL_MAX
+};
+
+/** Invalid table scope id */
+#define TF_TBL_SCOPE_INVALID 0xffffffff
+
+/**
+ * Table Scope Control Block
+ *
+ * Holds private data for a table scope. Only one instance of a table
+ * scope with Internal EM is supported.
+ */
+struct tf_tbl_scope_cb {
+	uint32_t tbl_scope_id;
+	int index;
+	uint32_t *ext_pool_mem[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
+};
+
+/**
+ * Initialize the table pool structure to indicate that no table
+ * scope has been associated with the external pool of indexes.
+ *
+ * [in] session
+ *   Pointer to session handle
+ */
+void
+tf_init_tbl_pool(struct tf_session *session);
+
+#endif /* _TF_TBL_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 09/33] net/bnxt: add tf core identifier support
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (7 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 08/33] net/bnxt: add resource manager functionality Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 10/33] net/bnxt: add tf core TCAM support Venkat Duvvuru
                   ` (24 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Farah Smith

From: Farah Smith <farah.smith@broadcom.com>

- Add TruFlow Identifier resource support
- Add TruFlow public API for Identifier resources.
- Add support code and stack for Identifier resource allocation control.

Signed-off-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
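Note (illustration only, not part of the patch): a minimal caller
sketch of the new identifier API, assuming a session already opened
with tf_open_session(). The helper name example_ident_cycle is made up
for this sketch.

    #include "tf_core.h"

    /* Allocate and free one RX profile-function identifier */
    static int example_ident_cycle(struct tf *tfp)
    {
            struct tf_alloc_identifier_parms aparms = { 0 };
            struct tf_free_identifier_parms fparms = { 0 };
            int rc;

            aparms.dir = TF_DIR_RX;
            aparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;

            /* Pulls a free id from the session-owned pool; no firmware
             * message is sent for identifier alloc/free.
             */
            rc = tf_alloc_identifier(tfp, &aparms);
            if (rc)
                    return rc;

            fparms.dir = TF_DIR_RX;
            fparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
            fparms.id = aparms.id;

            /* Hands the id back to the same session pool */
            return tf_free_identifier(tfp, &fparms);
    }
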
 drivers/net/bnxt/tf_core/tf_core.c | 156 +++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h |  55 +++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c  |  13 ++++
 3 files changed, 224 insertions(+)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 259ffa2..037f7d1 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -283,3 +283,159 @@ tf_close_session(struct tf *tfp)
 
 	return rc_close;
 }
+
+/** allocate identifier resource
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_identifier(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms)
+{
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+	int id;
+	int rc;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	switch (parms->ident_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_L2_CTXT_REMAP_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_PROF_FUNC:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_PROF_FUNC_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_EM_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_EM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_WC_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_WC_TCAM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_L2_FUNC:
+		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EOPNOTSUPP;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "%s: invalid %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EINVAL;
+		break;
+	}
+
+	if (rc) {
+		PMD_DRV_LOG(ERR, "%s: identifier pool %s failure\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		return rc;
+	}
+
+	id = ba_alloc(session_pool);
+
+	if (id == BA_FAIL) {
+		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		return -ENOMEM;
+	}
+	parms->id = id;
+	return 0;
+}
+
+/** free identifier resource
+ *
+ * Returns success or failure code.
+ */
+int tf_free_identifier(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms)
+{
+	struct bitalloc *session_pool;
+	int rc;
+	int ba_rc;
+	struct tf_session *tfs;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: Session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	switch (parms->ident_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_L2_CTXT_REMAP_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_PROF_FUNC:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_PROF_FUNC_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_EM_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_EM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_WC_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_WC_TCAM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_L2_FUNC:
+		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EOPNOTSUPP;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "%s: invalid %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EINVAL;
+		break;
+	}
+	if (rc) {
+		PMD_DRV_LOG(ERR, "%s: %s Identifier pool access failed\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		return rc;
+	}
+
+	ba_rc = ba_inuse(session_pool, (int)parms->id);
+
+	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
+		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type),
+			    parms->id);
+		return -EINVAL;
+	}
+
+	ba_free(session_pool, (int)parms->id);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 16c8251..afad9ea 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -402,6 +402,61 @@ enum tf_identifier_type {
 	TF_IDENT_TYPE_L2_FUNC
 };
 
+/** tf_alloc_identifier parameter definition
+ */
+struct tf_alloc_identifier_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [out] Identifier allocated
+	 */
+	uint16_t id;
+};
+
+/** tf_free_identifier parameter definition
+ */
+struct tf_free_identifier_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [in] ID to free
+	 */
+	uint16_t id;
+};
+
+/** allocate identifier resource
+ *
+ * TruFlow core will allocate a free id from the per identifier resource type
+ * pool reserved for the session during tf_open().  No firmware is involved.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_identifier(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms);
+
+/** free identifier resource
+ *
+ * TruFlow core will return an id back to the per identifier resource type pool
+ * reserved for the session.  No firmware is involved.  During tf_close, the
+ * complete pool is returned to the firmware.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_identifier(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms);
+
 /**
  * TCAM table type
  */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 4ce2bc5..c44f96f 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -94,6 +94,19 @@
 } while (0)
 
 /**
+ * This is the MAX data (in bytes) we can transport across regular HWRM
+ */
+#define TF_PCI_BUF_SIZE_MAX 88
+
+/**
+ * If the data is bigger than TF_PCI_BUF_SIZE_MAX then the DMA method is used
+ */
+struct tf_msg_dma_buf {
+	void *va_addr;
+	uint64_t pa_addr;
+};
+
+/**
  * Sends session open request to TF Firmware
  */
 int
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 10/33] net/bnxt: add tf core TCAM support
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (8 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 09/33] net/bnxt: add tf core identifier support Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 11/33] net/bnxt: add tf core table scope support Venkat Duvvuru
                   ` (23 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle, Jay Ding

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add TruFlow TCAM public API functions
- Add TCAM support functions backing the public APIs.

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
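Usage sketch (illustrative, not part of the patch): the alloc/set/free
life cycle of a TCAM entry against an open session. The key/mask/result
widths below are placeholders, not device values.

    #include "tf_core.h"

    static int example_tcam_cycle(struct tf *tfp, uint8_t *key,
                                  uint8_t *mask, uint8_t *result)
    {
            struct tf_alloc_tcam_entry_parms ap = { 0 };
            struct tf_set_tcam_entry_parms sp = { 0 };
            struct tf_free_tcam_entry_parms fp = { 0 };
            int rc;

            ap.dir = TF_DIR_RX;
            ap.tcam_tbl_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM;
            rc = tf_alloc_tcam_entry(tfp, &ap);  /* reserves an index */
            if (rc)
                    return rc;

            sp.dir = TF_DIR_RX;
            sp.tcam_tbl_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM;
            sp.idx = ap.idx;
            sp.key = key;
            sp.mask = mask;
            sp.key_sz_in_bits = 160;             /* placeholder width */
            sp.result = result;
            sp.result_sz_in_bits = 64;           /* placeholder width */
            rc = tf_set_tcam_entry(tfp, &sp);    /* programs the entry */
            if (rc)
                    return rc;

            fp.dir = TF_DIR_RX;
            fp.tcam_tbl_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM;
            fp.idx = ap.idx;
            return tf_free_tcam_entry(tfp, &fp); /* invalidates entry */
    }

On the wire, tf_msg_tcam_entry_set() lays out key, mask and result
back-to-back (result_offset = 2 * key_bytes) and switches from the
inline HWRM buffer to a DMA buffer once that payload exceeds
TF_PCI_BUF_SIZE_MAX (88 bytes).
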
 drivers/net/bnxt/tf_core/tf_core.c | 163 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h | 227 +++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c  | 159 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h  |  30 +++++
 4 files changed, 579 insertions(+)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 037f7d1..152cfa2 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -439,3 +439,166 @@ int tf_free_identifier(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_alloc_tcam_entry(struct tf *tfp,
+		    struct tf_alloc_tcam_entry_parms *parms)
+{
+	int rc;
+	int index;
+	struct tf_session *tfs;
+	struct bitalloc *session_pool;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	rc = tf_rm_lookup_tcam_type_pool(tfs,
+					 parms->dir,
+					 parms->tcam_tbl_type,
+					 &session_pool);
+	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
+	if (rc)
+		return rc;
+
+	index = ba_alloc(session_pool);
+	if (index == BA_FAIL) {
+		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+		return -ENOMEM;
+	}
+
+	parms->idx = index;
+	return 0;
+}
+
+int
+tf_set_tcam_entry(struct tf *tfp,
+		  struct tf_set_tcam_entry_parms *parms)
+{
+	int rc;
+	int id;
+	struct tf_session *tfs;
+	struct bitalloc *session_pool;
+
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "%s, Session info invalid\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/*
+	 * Each tcam send msg function should check that the key size is in range
+	 */
+
+	rc = tf_rm_lookup_tcam_type_pool(tfs,
+					 parms->dir,
+					 parms->tcam_tbl_type,
+					 &session_pool);
+	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
+	if (rc)
+		return rc;
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, parms->idx);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "%s: %s: Invalid or not allocated index, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	rc = tf_msg_tcam_entry_set(tfp, parms);
+
+	return rc;
+}
+
+int
+tf_get_tcam_entry(struct tf *tfp __rte_unused,
+		  struct tf_get_tcam_entry_parms *parms __rte_unused)
+{
+	int rc = -EOPNOTSUPP;
+
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "%s, Session info invalid\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+int
+tf_free_tcam_entry(struct tf *tfp,
+		   struct tf_free_tcam_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct bitalloc *session_pool;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: Session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	rc = tf_rm_lookup_tcam_type_pool(tfs,
+					 parms->dir,
+					 parms->tcam_tbl_type,
+					 &session_pool);
+	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
+	if (rc)
+		return rc;
+
+	rc = ba_inuse(session_pool, (int)parms->idx);
+	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
+		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+			    parms->idx);
+		return -EINVAL;
+	}
+
+	ba_free(session_pool, (int)parms->idx);
+
+	rc = tf_msg_tcam_entry_free(tfp, parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR, "%s: %s: Entry %d free failed\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+			    parms->idx);
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index afad9ea..1431d06 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -472,6 +472,233 @@ enum tf_tcam_tbl_type {
 };
 
 /**
+ * @page tcam TCAM Access
+ *
+ * @ref tf_alloc_tcam_entry
+ *
+ * @ref tf_set_tcam_entry
+ *
+ * @ref tf_get_tcam_entry
+ *
+ * @ref tf_free_tcam_entry
+ */
+
+/** tf_alloc_tcam_entry parameter definition
+ */
+struct tf_alloc_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Enable search for matching entry
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Key data to match on (if search)
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key size in bits (if search)
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] Mask data to match on (if search)
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
+	/**
+	 * [out] If search, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current refcnt after allocation
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx allocated
+	 *
+	 */
+	uint16_t idx;
+};
+
+/** allocate TCAM entry
+ *
+ * Allocate a TCAM entry - one of these types:
+ *
+ * L2 Context
+ * Profile TCAM
+ * WC TCAM
+ * VEB TCAM
+ *
+ * This function allocates a TCAM table record. It will attempt to
+ * allocate a TCAM table entry from the session-owned TCAM entries, or
+ * search a shadow copy of the TCAM table for a matching entry if
+ * search is enabled. Key, mask and result must all match for hit to
+ * be set. Only TruFlow core data is accessed. A hash table to entry
+ * mapping is maintained for search purposes. If search is not
+ * enabled, the first available free entry is returned based on
+ * priority and alloc_cnt is set to 1. If search is enabled and a
+ * matching entry is found, hit is set to TRUE and alloc_cnt is set
+ * to 1. RefCnt is also returned.
+ *
+ * Also returns success or failure code.
+ */
+int tf_alloc_tcam_entry(struct tf *tfp,
+			struct tf_alloc_tcam_entry_parms *parms);
+
+/** tf_set_tcam_entry parameter definition
+ */
+struct tf_set_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] base index of the entry to program
+	 */
+	uint16_t idx;
+	/**
+	 * [in] struct containing key
+	 */
+	uint8_t *key;
+	/**
+	 * [in] struct containing mask fields
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] key size in bits (if search)
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] struct containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [in] struct containing result size in bits
+	 */
+	uint16_t result_sz_in_bits;
+};
+
+/** set TCAM entry
+ *
+ * Program a TCAM table entry for a TruFlow session.
+ *
+ * If the entry has not been allocated, an error will be returned.
+ *
+ * Returns success or failure code.
+ */
+int tf_set_tcam_entry(struct tf *tfp,
+		      struct tf_set_tcam_entry_parms *parms);
+
+/** tf_get_tcam_entry parameter definition
+ */
+struct tf_get_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type  tcam_tbl_type;
+	/**
+	 * [in] index of the entry to get
+	 */
+	uint16_t idx;
+	/**
+	 * [out] struct containing key
+	 */
+	uint8_t *key;
+	/**
+	 * [out] struct containing mask fields
+	 */
+	uint8_t *mask;
+	/**
+	 * [out] key size in bits
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [out] struct containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [out] struct containing result size in bits
+	 */
+	uint16_t result_sz_in_bits;
+};
+
+/** get TCAM entry
+ *
+ * Read a TCAM table entry for a TruFlow session.
+ *
+ * If the entry has not been allocated, an error will be returned.
+ *
+ * Returns success or failure code.
+ */
+int tf_get_tcam_entry(struct tf *tfp,
+		      struct tf_get_tcam_entry_parms *parms);
+
+/** tf_free_tcam_entry parameter definition
+ */
+struct tf_free_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Index to free
+	 */
+	uint16_t idx;
+	/**
+	 * [out] reference count after free
+	 */
+	uint16_t ref_cnt;
+};
+
+/** free TCAM entry
+ *
+ * Free TCAM entry.
+ *
+ * Firmware checks that the TCAM entry is owned by the TruFlow
+ * session, then invalidates the entry by writing an all-ones mask
+ * to hw.
+ *
+ * WCTCAM profile id of 0 must be used to invalidate an entry.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tcam_entry(struct tf *tfp,
+		       struct tf_free_tcam_entry_parms *parms);
+
+/**
+ * @page table Table Access
+ *
+ * @ref tf_alloc_tbl_entry
+ *
+ * @ref tf_free_tbl_entry
+ *
+ * @ref tf_set_tbl_entry
+ *
+ * @ref tf_get_tbl_entry
+ */
+
+/**
  * Enumeration of TruFlow table types. A table type is used to identify a
  * resource object.
  *
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c44f96f..f4b2f4c 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -106,6 +106,39 @@ struct tf_msg_dma_buf {
 	uint64_t pa_addr;
 };
 
+static int
+tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
+		   uint32_t *hwrm_type)
+{
+	int rc = 0;
+
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		*hwrm_type = TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY;
+		break;
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		*hwrm_type = TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY;
+		break;
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		*hwrm_type = TF_DEV_DATA_TYPE_TF_WC_ENTRY;
+		break;
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+		rc = -EOPNOTSUPP;
+		break;
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+		rc = -EOPNOTSUPP;
+		break;
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+		rc = -EOPNOTSUPP;
+		break;
+	default:
+		rc = -EOPNOTSUPP;
+		break;
+	}
+
+	return rc;
+}
+
 /**
  * Sends session open request to TF Firmware
  */
@@ -835,3 +868,129 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+#define TF_BYTES_PER_SLICE(tfp) 12
+#define NUM_SLICES(tfp, bytes) \
+	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
+
+static int
+tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
+{
+	struct tfp_calloc_parms alloc_parms;
+	int rc;
+
+	/* Allocate a DMA buffer for the message payload */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = size;
+	alloc_parms.alignment = 0;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate tcam dma entry, rc:%d\n",
+			    rc);
+		return -ENOMEM;
+	}
+
+	buf->pa_addr = (uint64_t)alloc_parms.mem_pa;
+	buf->va_addr = alloc_parms.mem_va;
+
+	return 0;
+}
+
+int
+tf_msg_tcam_entry_set(struct tf *tfp,
+		      struct tf_set_tcam_entry_parms *parms)
+{
+	int rc;
+	struct tfp_send_msg_parms mparms = { 0 };
+	struct hwrm_tf_tcam_set_input req = { 0 };
+	struct hwrm_tf_tcam_set_output resp = { 0 };
+	uint16_t key_bytes =
+		TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	uint16_t result_bytes =
+		TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
+	struct tf_msg_dma_buf buf = { 0 };
+	uint8_t *data = NULL;
+	int data_size = 0;
+
+	rc = tf_tcam_tbl_2_hwrm(parms->tcam_tbl_type, &req.type);
+	if (rc != 0)
+		return rc;
+
+	req.idx = tfp_cpu_to_le_16(parms->idx);
+	if (parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
+
+	req.key_size = key_bytes;
+	req.mask_offset = key_bytes;
+	/* Result follows after key and mask, thus multiply by 2 */
+	req.result_offset = 2 * key_bytes;
+	req.result_size = result_bytes;
+	data_size = 2 * req.key_size + req.result_size;
+
+	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
+		/* use pci buffer */
+		data = &req.dev_data[0];
+	} else {
+		/* use dma buffer */
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
+		rc = tf_msg_get_dma_buf(&buf, data_size);
+		if (rc != 0)
+			return rc;
+		data = buf.va_addr;
+		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
+	}
+
+	memcpy(&data[0], parms->key, key_bytes);
+	memcpy(&data[key_bytes], parms->mask, key_bytes);
+	memcpy(&data[req.result_offset], parms->result, result_bytes);
+
+	mparms.tf_type = HWRM_TF_TCAM_SET;
+	mparms.req_data = (uint32_t *)&req;
+	mparms.req_size = sizeof(req);
+	mparms.resp_data = (uint32_t *)&resp;
+	mparms.resp_size = sizeof(resp);
+	mparms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &mparms);
+
+	/* Free the DMA buffer on both the success and failure paths */
+	if (buf.va_addr != NULL)
+		tfp_free(buf.va_addr);
+
+	return rc;
+}
+
+int
+tf_msg_tcam_entry_free(struct tf *tfp,
+		       struct tf_free_tcam_entry_parms *in_parms)
+{
+	int rc;
+	struct hwrm_tf_tcam_free_input req =  { 0 };
+	struct hwrm_tf_tcam_free_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	rc = tf_tcam_tbl_2_hwrm(in_parms->tcam_tbl_type, &req.type);
+	if (rc != 0)
+		return rc;
+
+	req.count = 1;
+	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
+	if (in_parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
+
+	parms.tf_type = HWRM_TF_TCAM_FREE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 057de84..fa74d78 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -120,4 +120,34 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   enum tf_dir dir,
 				   struct tf_rm_entry *sram_entry);
 
+/**
+ * Sends tcam entry 'set' to the Firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to set parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_tcam_entry_set(struct tf *tfp,
+			  struct tf_set_tcam_entry_parms *parms);
+
+/**
+ * Sends tcam entry 'free' to the Firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_tcam_entry_free(struct tf *tfp,
+			   struct tf_free_tcam_entry_parms *parms);
+
 #endif  /* _TF_MSG_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 11/33] net/bnxt: add tf core table scope support
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (9 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 10/33] net/bnxt: add tf core TCAM support Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 12/33] net/bnxt: add EM/EEM functionality Venkat Duvvuru
                   ` (22 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Farah Smith, Michael Wildt

From: Farah Smith <farah.smith@broadcom.com>

- Add TruFlow Table public API
- Add Table Scope capability, including Table Type support code for
  setting and getting Table Types.

Signed-off-by: Farah Smith <farah.smith@broadcom.com>
Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
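Usage sketch (illustrative, not part of the patch): requesting a table
scope sized by flow count against an open session. All numeric values
are arbitrary examples; per the API comment, set either num_flows_in_k
or mem_size_in_mb per direction, not both.

    #include "tf_core.h"

    static int example_tbl_scope(struct tf *tfp, uint32_t *tbl_scope_id)
    {
            struct tf_alloc_tbl_scope_parms parms = { 0 };
            int rc;

            /* Size by flows; the mem_size_in_mb fields stay 0 */
            parms.rx_max_key_sz_in_bits = 448;
            parms.rx_max_action_entry_sz_in_bits = 256;
            parms.rx_num_flows_in_k = 512;
            parms.tx_max_key_sz_in_bits = 448;
            parms.tx_max_action_entry_sz_in_bits = 256;
            parms.tx_num_flows_in_k = 512;

            rc = tf_alloc_tbl_scope(tfp, &parms);
            if (rc)
                    return rc;

            *tbl_scope_id = parms.tbl_scope_id;
            return 0;
    }

Freeing is symmetric: put the returned id in
tf_free_tbl_scope_parms.tbl_scope_id and call tf_free_tbl_scope().
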
 drivers/net/bnxt/Makefile          |   1 +
 drivers/net/bnxt/tf_core/hwrm_tf.h |  21 ++++++
 drivers/net/bnxt/tf_core/tf_core.c |   4 ++
 drivers/net/bnxt/tf_core/tf_core.h | 128 +++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c  |  81 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h  |  63 ++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.c  |  43 +++++++++++++
 7 files changed, 341 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index ed8b1e2..b97abb6 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -52,6 +52,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_rm.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tfp.c
 
 #
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index a8a5547..acb9a8b 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -891,6 +891,27 @@ typedef struct tf_session_sram_resc_flush_input {
 } tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
 BUILD_BUG_ON(sizeof(tf_session_sram_resc_flush_input_t) <= TF_MAX_REQ_SIZE);
 
+/* Input params for table type set */
+typedef struct tf_tbl_type_set_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the set applies to RX */
+#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX			(0x0)
+	/* When set to 1, indicates the set applies to TX */
+#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* Type of the object to set */
+	uint32_t			 type;
+	/* Size of the data to set in bytes */
+	uint16_t			 size;
+	/* Data to set */
+	uint8_t			  data[TF_BULK_SEND];
+	/* Index to set */
+	uint32_t			 index;
+} tf_tbl_type_set_input_t, *ptf_tbl_type_set_input_t;
+BUILD_BUG_ON(sizeof(tf_tbl_type_set_input_t) <= TF_MAX_REQ_SIZE);
+
 /* Input params for table type get */
 typedef struct tf_tbl_type_get_input {
 	/* Session Id */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 152cfa2..2833de2 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -7,6 +7,7 @@
 
 #include "tf_core.h"
 #include "tf_session.h"
+#include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
@@ -172,6 +173,9 @@ tf_open_session(struct tf                    *tfp,
 	/* Setup hash seeds */
 	tf_seeds_init(session);
 
+	/* Initialize external pool data structures */
+	tf_init_tbl_pool(session);
+
 	session->ref_count++;
 
 	/* Return session ID */
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 1431d06..4c90677 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -458,6 +458,134 @@ int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
 
 /**
+ * @page dram_table DRAM Table Scope Interface
+ *
+ * @ref tf_alloc_tbl_scope
+ *
+ * @ref tf_free_tbl_scope
+ *
+ * If we allocate the EEM memory from the core, we need to store it in
+ * the shared session data structure to make sure it can be freed later.
+ * (for example if the PF goes away)
+ *
+ * Current thought is that memory is allocated within core.
+ */
+
+
+/** tf_alloc_tbl_scope_parms definition
+ */
+struct tf_alloc_tbl_scope_parms {
+	/**
+	 * [in] Maximum key size required.
+	 */
+	uint16_t rx_max_key_sz_in_bits;
+	/**
+	 * [in] Maximum Action size required (includes inlined items)
+	 */
+	uint16_t rx_max_action_entry_sz_in_bits;
+	/**
+	 * [in] Memory size in Megabytes
+	 * Total memory size allocated by user to be divided
+	 * up for actions, hash, counters.  Only inline external actions.
+	 * Use this variable or the number of flows, do not set both.
+	 */
+	uint32_t rx_mem_size_in_mb;
+	/**
+	 * [in] Number of flows * 1000. If set, rx_mem_size_in_mb must equal 0.
+	 */
+	uint32_t rx_num_flows_in_k;
+	/**
+	 * [in] SR2 only receive table access interface id
+	 */
+	uint32_t rx_tbl_if_id;
+	/**
+	 * [in] Maximum key size required.
+	 */
+	uint16_t tx_max_key_sz_in_bits;
+	/**
+	 * [in] Maximum Action size required (includes inlined items)
+	 */
+	uint16_t tx_max_action_entry_sz_in_bits;
+	/**
+	 * [in] Memory size in Megabytes
+	 * Total memory size allocated by user to be divided
+	 * up for actions, hash, counters.  Only inline external actions.
+	 */
+	uint32_t tx_mem_size_in_mb;
+	/**
+	 * [in] Number of flows * 1000
+	 */
+	uint32_t tx_num_flows_in_k;
+	/**
+	 * [in] SR2 only transmit table access interface id
+	 */
+	uint32_t tx_tbl_if_id;
+	/**
+	 * [out] table scope identifier
+	 */
+	uint32_t tbl_scope_id;
+};
+
+struct tf_free_tbl_scope_parms {
+	/**
+	 * [in] table scope identifier
+	 */
+	uint32_t tbl_scope_id;
+};
+
+/**
+ * allocate a table scope
+ *
+ * On SR2 Firmware will allocate a scope ID.  On other devices, the scope
+ * is a software construct to identify an EEM table.  This function will
+ * divide the hash memory/buckets and records according to the device
+ * constraints based upon calculations using either the number of flows
+ * requested or the size of memory indicated.  Other parameters passed in
+ * determine the configuration (maximum key size, maximum external action
+ * record size).
+ *
+ * This API will allocate the table region in
+ * DRAM, program the PTU page table entries, and program the number of static
+ * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
+ * 0 in the EM memory for the scope.  Upon successful completion of this API,
+ * hash tables are fully initialized and ready for entries to be inserted.
+ *
+ * A single API is used to allocate a common table scope identifier in both
+ * receive and transmit CFA. The scope identifier is common because
+ * connection tracking sends notifications between the RX and TX directions.
+ *
+ * The receive and transmit table access identifiers specify which rings will
+ * be used to initialize table DRAM.  The application must ensure mutual
+ * exclusivity of ring usage for table scope allocation and any table update
+ * operations.
+ *
+ * The hash table buckets, EM keys, and EM lookup results are stored in the
+ * memory allocated based on the rx_mem_size_in_mb/tx_mem_size_in_mb parameters.  The
+ * hash table buckets are stored at the beginning of that memory.
+ *
+ * NOTES:  No EM internal setup is done here. On chip EM records are managed
+ * internally by TruFlow core.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_tbl_scope(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms);
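+
+/** Usage sketch (hypothetical sizing values; error handling omitted).
+ * For each direction either the memory size or the number of flows is
+ * supplied, never both, so the *_mem_size_in_mb fields stay 0 here:
+ *
+ *   struct tf_alloc_tbl_scope_parms aparms = { 0 };
+ *   struct tf_free_tbl_scope_parms fparms = { 0 };
+ *
+ *   aparms.rx_max_key_sz_in_bits = 448;
+ *   aparms.rx_max_action_entry_sz_in_bits = 256;
+ *   aparms.rx_num_flows_in_k = 32;
+ *   aparms.tx_max_key_sz_in_bits = 448;
+ *   aparms.tx_max_action_entry_sz_in_bits = 256;
+ *   aparms.tx_num_flows_in_k = 32;
+ *
+ *   if (tf_alloc_tbl_scope(tfp, &aparms) == 0) {
+ *           fparms.tbl_scope_id = aparms.tbl_scope_id;
+ *           tf_free_tbl_scope(tfp, &fparms);
+ *   }
+ */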
+
+
+/**
+ * free a table scope
+ *
+ * Firmware checks that the table scope ID is owned by the TruFlow
+ * session, verifies that no references to this table scope remain
+ * (SR2 ILT or Profile TCAM entries) for either CFA (RX/TX) direction,
+ * then frees the table scope ID.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tbl_scope(struct tf *tfp,
+		      struct tf_free_tbl_scope_parms *parms);
+
+/**
  * TCAM table type
  */
 enum tf_tcam_tbl_type {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index f4b2f4c..b9ed127 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -869,6 +869,87 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+int
+tf_msg_set_tbl_entry(struct tf *tfp,
+		     enum tf_dir dir,
+		     enum tf_tbl_type type,
+		     uint16_t size,
+		     uint8_t *data,
+		     uint32_t index)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_set_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(type);
+	req.size = tfp_cpu_to_le_16(size);
+	req.index = tfp_cpu_to_le_32(index);
+
+	tfp_memcpy(&req.data,
+		   data,
+		   size);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_TBL_TYPE_SET,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_get_tbl_entry(struct tf *tfp,
+		     enum tf_dir dir,
+		     enum tf_tbl_type type,
+		     uint16_t size,
+		     uint8_t *data,
+		     uint32_t index)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_get_input req = { 0 };
+	struct tf_tbl_type_get_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(type);
+	req.index = tfp_cpu_to_le_32(index);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_TBL_TYPE_GET,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Verify that the firmware returned enough data for the request */
+	if (resp.size < size)
+		return -EINVAL;
+
+	/* Copy only the requested size so an oversized response cannot
+	 * overrun the caller's buffer.
+	 */
+	tfp_memcpy(data,
+		   &resp.data,
+		   size);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
 #define TF_BYTES_PER_SLICE(tfp) 12
 #define NUM_SLICES(tfp, bytes) \
 	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index fa74d78..9055b16 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -6,6 +6,7 @@
 #ifndef _TF_MSG_H_
 #define _TF_MSG_H_
 
+#include "tf_tbl.h"
 #include "tf_rm.h"
 
 struct tf;
@@ -150,4 +151,66 @@ int tf_msg_tcam_entry_set(struct tf *tfp,
 int tf_msg_tcam_entry_free(struct tf *tfp,
 			   struct tf_free_tcam_entry_parms *parms);
 
+/**
+ * Sends Set message of a Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] dir
+ *   Direction location of the element to set
+ *
+ * [in] type
+ *   Type of the object to set
+ *
+ * [in] size
+ *   Size of the data to set
+ *
+ * [in] data
+ *   Data to set
+ *
+ * [in] index
+ *   Index to set
+ *
+ * Returns:
+ *   0 - Success
+ */
+int tf_msg_set_tbl_entry(struct tf *tfp,
+			 enum tf_dir dir,
+			 enum tf_tbl_type type,
+			 uint16_t size,
+			 uint8_t *data,
+			 uint32_t index);
+
+/**
+ * Sends get message of a Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] dir
+ *   Direction location of the element to get
+ *
+ * [in] type
+ *   Type of the object to get
+ *
+ * [in] size
+ *   Size of the data to read
+ *
+ * [out] data
+ *   Buffer that receives the data read
+ *
+ * [in] index
+ *   Index to get
+ *
+ * Returns:
+ *   0 - Success
+ */
+int tf_msg_get_tbl_entry(struct tf *tfp,
+			 enum tf_dir dir,
+			 enum tf_tbl_type type,
+			 uint16_t size,
+			 uint8_t *data,
+			 uint32_t index);
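+
+/** Usage sketch for the two calls above (direction, type, index and
+ * data are hypothetical; assumes an open session behind tfp):
+ *
+ *   uint8_t data[8] = { 0 };
+ *   uint8_t out[8] = { 0 };
+ *
+ *   if (tf_msg_set_tbl_entry(tfp, TF_DIR_RX, TF_TBL_TYPE_EXT,
+ *                            sizeof(data), data, 10) == 0)
+ *           tf_msg_get_tbl_entry(tfp, TF_DIR_RX, TF_TBL_TYPE_EXT,
+ *                                sizeof(out), out, 10);
+ */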
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
new file mode 100644
index 0000000..14bf4ef
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Truflow Table APIs and supporting code */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include "hsi_struct_def_dpdk.h"
+
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "hwrm_tf.h"
+#include "bnxt.h"
+#include "tf_resources.h"
+#include "tf_rm.h"
+
+#define PTU_PTE_VALID          0x1UL
+#define PTU_PTE_LAST           0x2UL
+#define PTU_PTE_NEXT_TO_LAST   0x4UL
+
+/* Number of pointers per page_size */
+#define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
+
+/* API defined in tf_tbl.h */
+void
+tf_init_tbl_pool(struct tf_session *session)
+{
+	enum tf_dir dir;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		session->ext_pool_2_scope[dir][TF_EXT_POOL_0] =
+			TF_TBL_SCOPE_INVALID;
+	}
+}
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 12/33] net/bnxt: add EM/EEM functionality
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (10 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 11/33] net/bnxt: add tf core table scope support Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 13/33] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
                   ` (21 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Pete Spreadborough

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- Add TruFlow flow memory support
- Exact Match (EM) adds the capability to manage and manipulate
  data flows using on-chip memory.
- Extended Exact Match (EEM) behaves similarly to EM, but at a
  vastly increased scale by using host DDR, with a performance
  tradeoff due to the need to access off-chip memory.

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |    2 +
 drivers/net/bnxt/tf_core/lookup3.h            |  161 +++
 drivers/net/bnxt/tf_core/stack.c              |  107 ++
 drivers/net/bnxt/tf_core/stack.h              |  107 ++
 drivers/net/bnxt/tf_core/tf_core.c            |   51 +
 drivers/net/bnxt/tf_core/tf_core.h            |  480 ++++++-
 drivers/net/bnxt/tf_core/tf_em.c              |  516 +++++++
 drivers/net/bnxt/tf_core/tf_em.h              |  117 ++
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |  166 +++
 drivers/net/bnxt/tf_core/tf_msg.c             |  171 +++
 drivers/net/bnxt/tf_core/tf_msg.h             |   40 +
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1795 ++++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   83 ++
 13 files changed, 3789 insertions(+), 7 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/lookup3.h
 create mode 100644 drivers/net/bnxt/tf_core/stack.c
 create mode 100644 drivers/net/bnxt/tf_core/stack.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index b97abb6..c950c6d 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -51,6 +51,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/rand.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tfp.c
diff --git a/drivers/net/bnxt/tf_core/lookup3.h b/drivers/net/bnxt/tf_core/lookup3.h
new file mode 100644
index 0000000..b1fd2cd
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/lookup3.h
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Based on lookup3.c, by Bob Jenkins, May 2006, Public Domain.
+ * http://www.burtleburtle.net/bob/c/lookup3.c
+ *
+ * These are functions for producing 32-bit hashes for hash table lookup.
+ * hashword(), hashlittle(), hashlittle2(), hashbig(), mix(), and final()
+ * are externally useful functions. Routines to test the hash are included
+ * if SELF_TEST is defined. You can use this free for any purpose. It is in
+ * the public domain. It has no warranty.
+ */
+
+#ifndef _LOOKUP3_H_
+#define _LOOKUP3_H_
+
+#define rot(x, k) (((x) << (k)) | ((x) >> (32 - (k))))
+
+/** -------------------------------------------------------------------------
+ * This is reversible, so any information in (a,b,c) before mix() is
+ * still in (a,b,c) after mix().
+ *
+ * If four pairs of (a,b,c) inputs are run through mix(), or through
+ * mix() in reverse, there are at least 32 bits of the output that
+ * are sometimes the same for one pair and different for another pair.
+ * This was tested for:
+ *   pairs that differed by one bit, by two bits, in any combination
+ *   of top bits of (a,b,c), or in any combination of bottom bits of
+ *   (a,b,c).
+ *   "differ" is defined as +, -, ^, or ~^.  For + and -, I transformed
+ *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ *   is commonly produced by subtraction) look like a single 1-bit
+ *   difference.
+ *   the base values were pseudorandom, all zero but one bit set, or
+ *   all zero plus a counter that starts at zero.
+ *
+ * Some k values for my "a-=c; a^=rot(c,k); c+=b;" arrangement that
+ * satisfy this are
+ *     4  6  8 16 19  4
+ *     9 15  3 18 27 15
+ *    14  9  3  7 17  3
+ * Well, "9 15 3 18 27 15" didn't quite get 32 bits diffing
+ * for "differ" defined as + with a one-bit base and a two-bit delta.  I
+ * used http://burtleburtle.net/bob/hash/avalanche.html to choose
+ * the operations, constants, and arrangements of the variables.
+ *
+ * This does not achieve avalanche.  There are input bits of (a,b,c)
+ * that fail to affect some output bits of (a,b,c), especially of a.  The
+ * most thoroughly mixed value is c, but it doesn't really even achieve
+ * avalanche in c.
+ *
+ * This allows some parallelism.  Read-after-writes are good at doubling
+ * the number of bits affected, so the goal of mixing pulls in the opposite
+ * direction as the goal of parallelism.  I did what I could.  Rotates
+ * seem to cost as much as shifts on every machine I could lay my hands
+ * on, and rotates are much kinder to the top and bottom bits, so I used
+ * rotates.
+ * --------------------------------------------------------------------------
+ */
+#define mix(a, b, c) \
+{ \
+	(a) -= (c); (a) ^= rot((c), 4);  (c) += b; \
+	(b) -= (a); (b) ^= rot((a), 6);  (a) += c; \
+	(c) -= (b); (c) ^= rot((b), 8);  (b) += a; \
+	(a) -= (c); (a) ^= rot((c), 16); (c) += b; \
+	(b) -= (a); (b) ^= rot((a), 19); (a) += c; \
+	(c) -= (b); (c) ^= rot((b), 4);  (b) += a; \
+}
+
+/** --------------------------------------------------------------------------
+ * final -- final mixing of 3 32-bit values (a,b,c) into c
+ *
+ * Pairs of (a,b,c) values differing in only a few bits will usually
+ * produce values of c that look totally different.  This was tested for
+ *  pairs that differed by one bit, by two bits, in any combination
+ *   of top bits of (a,b,c), or in any combination of bottom bits of
+ *   (a,b,c).
+ *   "differ" is defined as +, -, ^, or ~^.  For + and -, I transformed
+ *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ *   is commonly produced by subtraction) look like a single 1-bit
+ *   difference.
+ *   the base values were pseudorandom, all zero but one bit set, or
+ *   all zero plus a counter that starts at zero.
+ *
+ * These constants passed:
+ *  14 11 25 16 4 14 24
+ *  12 14 25 16 4 14 24
+ * and these came close:
+ *   4  8 15 26 3 22 24
+ *  10  8 15 26 3 22 24
+ *  11  8 15 26 3 22 24
+ * --------------------------------------------------------------------------
+ */
+#define final(a, b, c) \
+{ \
+	(c) ^= (b); (c) -= rot((b), 14); \
+	(a) ^= (c); (a) -= rot((c), 11); \
+	(b) ^= (a); (b) -= rot((a), 25); \
+	(c) ^= (b); (c) -= rot((b), 16); \
+	(a) ^= (c); (a) -= rot((c), 4);  \
+	(b) ^= (a); (b) -= rot((a), 14); \
+	(c) ^= (b); (c) -= rot((b), 24); \
+}
+
+/** --------------------------------------------------------------------
+ *  This works on all machines.  To be useful, it requires
+ *  -- that the key be an array of uint32_t's, and
+ *  -- that the length be the number of uint32_t's in the key
+ *
+ *  The function hashword() is identical to hashlittle() on little-endian
+ *  machines, and identical to hashbig() on big-endian machines,
+ *  except that the length has to be measured in uint32_ts rather than in
+ *  bytes. hashlittle() is more complicated than hashword() only because
+ *  hashlittle() has to dance around fitting the key bytes into registers.
+ *
+ *  Input Parameters:
+ *	 key: an array of uint32_t values
+ *	 length: the length of the key, in uint32_ts
+ *	 initval: the previous hash, or an arbitrary value
+ * --------------------------------------------------------------------
+ */
+static inline uint32_t hashword(const uint32_t *k,
+				size_t length,
+				uint32_t initval) {
+	uint32_t a, b, c;
+	int index = 12;
+
+	/* Set up the internal state */
+	a = 0xdeadbeef + (((uint32_t)length) << 2) + initval;
+	b = a;
+	c = a;
+
+	/*-------------------------------------------- handle most of the key */
+	while (length > 3) {
+		a += k[index];
+		b += k[index - 1];
+		c += k[index - 2];
+		mix(a, b, c);
+		length -= 3;
+		index -= 3;
+	}
+
+	/*-------------------------------------- handle the last 3 uint32_t's */
+	switch (length) {	      /* all the case statements fall through */
+	case 3:
+		c += k[index - 2];
+		/* Falls through. */
+	case 2:
+		b += k[index - 1];
+		/* Falls through. */
+	case 1:
+		a += k[index];
+		final(a, b, c);
+		/* Falls through. */
+	case 0:	    /* case 0: nothing left to add */
+		break;
+	}
+	/*------------------------------------------------- report the result */
+	return c;
+}
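+
+/* Usage sketch: hashing a 52-byte EM key viewed as 13 uint32_t words
+ * (key contents and seed value are hypothetical):
+ *
+ *   uint32_t key[13] = { 0 };
+ *   uint32_t hash = hashword(key, 13, 0xdeadbeef);
+ */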
+
+#endif /* _LOOKUP3_H_ */
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
new file mode 100644
index 0000000..3337073
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <errno.h>
+#include "stack.h"
+
+#define STACK_EMPTY -1
+
+/* Initialize stack
+ */
+int
+stack_init(int num_entries, uint32_t *items, struct stack *st)
+{
+	if (items == NULL || st == NULL)
+		return -EINVAL;
+
+	st->max = num_entries;
+	st->top = STACK_EMPTY;
+	st->items = items;
+
+	return 0;
+}
+
+/* Return the size of the stack
+ */
+int32_t
+stack_size(struct stack *st)
+{
+	return st->top + 1;
+}
+
+/* Check if the stack is empty
+ */
+bool
+stack_is_empty(struct stack *st)
+{
+	return st->top == STACK_EMPTY;
+}
+
+/* Check if the stack is full
+ */
+bool
+stack_is_full(struct stack *st)
+{
+	return st->top == st->max - 1;
+}
+
+/* Add element x to the stack
+ */
+int
+stack_push(struct stack *st, uint32_t x)
+{
+	if (stack_is_full(st))
+		return -EOVERFLOW;
+
+	/* increment the top index, then store the element
+	 */
+	st->items[++st->top] = x;
+
+	return 0;
+}
+
+/* Pop top element x from the stack and return
+ * in user provided location.
+ */
+int
+stack_pop(struct stack *st, uint32_t *x)
+{
+	if (stack_is_empty(st))
+		return -ENODATA;
+
+	*x = st->items[st->top];
+	st->top--;
+
+	return 0;
+}
+
+/* Dump the stack
+ */
+void stack_dump(struct stack *st)
+{
+	int i, j;
+
+	printf("top=%d\n", st->top);
+	printf("max=%d\n", st->max);
+
+	if (st->top == -1) {
+		printf("stack is empty\n");
+		return;
+	}
+
+	/* i indexes items and is also advanced by the inner loop */
+	for (i = 0; i < st->max; i++) {
+		printf("item[%d] 0x%08x", i, st->items[i]);
+
+		for (j = 0; j < 7; j++) {
+			if (i++ < st->max - 1)
+				printf(" 0x%08x", st->items[i]);
+		}
+		printf("\n");
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/stack.h b/drivers/net/bnxt/tf_core/stack.h
new file mode 100644
index 0000000..6fe8829
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/stack.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+#ifndef _STACK_H_
+#define _STACK_H_
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdint.h>
+
+/** Stack data structure
+ */
+struct stack {
+	int max;         /**< Maximum number of entries */
+	int top;         /**< Index of the top entry */
+	uint32_t *items; /**< items in the stack */
+};
+
+/** Initialize stack of uint32_t elements
+ *
+ *  [in] num_entries
+ *    maximum number of elements in the stack
+ *
+ *  [in] items
+ *    pointer to items (must be sized to uint32_t * num_entries)
+ *
+ *  [in] st
+ *    pointer to the stack structure
+ *
+ *  return
+ *    0 for success
+ */
+int stack_init(int num_entries,
+	       uint32_t *items,
+	       struct stack *st);
+
+/** Return the size of the stack
+ *
+ *  [in] st
+ *    pointer to the stack
+ *
+ *  return
+ *    number of elements
+ */
+int32_t stack_size(struct stack *st);
+
+/** Check if the stack is empty
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *   true or false
+ */
+bool stack_is_empty(struct stack *st);
+
+/** Check if the stack is full
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *   true or false
+ */
+bool stack_is_full(struct stack *st);
+
+/** Add element x to the stack
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * [in] x
+ *   value to push on the stack
+ * return
+ *  0 for success
+ */
+int stack_push(struct stack *st, uint32_t x);
+
+/** Pop top element x from the stack and return
+ * in user provided location.
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * [in, out] x
+ *  pointer to where the value popped will be written
+ *
+ * return
+ *  0 for success
+ */
+int stack_pop(struct stack *st, uint32_t *x);
+
+/** Dump stack information
+ *
+ * Warning: Don't use for large stacks due to prints
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *    none
+ */
+void stack_dump(struct stack *st);
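+
+/** Usage sketch (the caller owns the item storage; sizes and values
+ * are hypothetical):
+ *
+ *   uint32_t items[32];
+ *   struct stack st;
+ *   uint32_t val;
+ *
+ *   if (stack_init(32, items, &st) == 0) {
+ *           stack_push(&st, 5);
+ *           stack_pop(&st, &val);
+ *   }
+ */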
+
+#endif /* _STACK_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 2833de2..8f037a2 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -8,6 +8,7 @@
 #include "tf_core.h"
 #include "tf_session.h"
 #include "tf_tbl.h"
+#include "tf_em.h"
 #include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
@@ -288,6 +289,56 @@ tf_close_session(struct tf *tfp)
 	return rc_close;
 }
 
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_insert_em_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb     *tbl_scope_cb;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb = tbl_scope_cb_find(
+		(struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	/* Process the EM entry per Table Scope type */
+	return tf_insert_eem_entry(
+		(struct tf_session *)(tfp->session->core_data),
+		tbl_scope_cb,
+		parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb     *tbl_scope_cb;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb = tbl_scope_cb_find(
+		(struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	return tf_delete_eem_entry(tfp, parms);
+}
+
 /** allocate identifier resource
  *
  * Returns success or failure code.
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 4c90677..34e643c 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -21,6 +21,10 @@
 
 /********** BEGIN Truflow Core DEFINITIONS **********/
 
+
+#define TF_KILOBYTE  1024
+#define TF_MEGABYTE  (1024 * 1024)
+
 /**
  * direction
  */
@@ -31,6 +35,27 @@ enum tf_dir {
 };
 
 /**
+ * memory choice
+ */
+enum tf_mem {
+	TF_MEM_INTERNAL, /**< Internal */
+	TF_MEM_EXTERNAL, /**< External */
+	TF_MEM_MAX
+};
+
+/**
+ * The size of the external action record (Wh+/Brd2)
+ *
+ * Currently set to 512.
+ *
+ * AR (16B) + encap (256B) + stats_ptrs (8) + resvd (8)
+ * + stats (16) = 304 aligned on a 16B boundary
+ *
+ * Theoretically, the size should be smaller. ~304B
+ */
+#define TF_ACTION_RECORD_SZ 512
+
+/**
  * External pool size
  *
  * Defines a single pool of external action records of
@@ -56,6 +81,23 @@ enum tf_dir {
 #define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
 #define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
 
+/** EEM record AR helper
+ *
+ * Helpers to handle the Action Record Pointer in the EEM Record Entry.
+ *
+ * Convert absolute offset to action record pointer in EEM record entry
+ * Convert action record pointer in EEM record entry to absolute offset
+ */
+#define TF_ACT_REC_OFFSET_2_PTR(offset) ((offset) >> 4)
+#define TF_ACT_REC_PTR_2_OFFSET(offset) ((offset) << 4)
+
+#define TF_ACT_REC_INDEX_2_OFFSET(idx) ((idx) << 9)
+
+/*
+ * Helper Macros
+ */
+#define TF_BITS_2_BYTES(num_bits) (((num_bits) + 7) / 8)
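+
+/* For example, TF_BITS_2_BYTES(448) yields 56, and an action record at
+ * byte offset 0x200 is stored in an EEM record entry as
+ * TF_ACT_REC_OFFSET_2_PTR(0x200) == 0x20.
+ */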
+
 /********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
 
 /**
@@ -495,7 +537,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t rx_num_flows_in_k;
 	/**
-	 * [in] SR2 only receive table access interface id
+	 * [in] Brd4 only receive table access interface id
 	 */
 	uint32_t rx_tbl_if_id;
 	/**
@@ -517,7 +559,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t tx_num_flows_in_k;
 	/**
-	 * [in] SR2 only transmit table access interface id
+	 * [in] Brd4 only transmit table access interface id
 	 */
 	uint32_t tx_tbl_if_id;
 	/**
@@ -536,7 +578,7 @@ struct tf_free_tbl_scope_parms {
 /**
  * allocate a table scope
  *
- * On SR2 Firmware will allocate a scope ID.  On other devices, the scope
+ * On Brd4 Firmware will allocate a scope ID.  On other devices, the scope
  * is a software construct to identify an EEM table.  This function will
  * divide the hash memory/buckets and records according to the device
  * constraints, based upon calculations using either the number of flows
@@ -546,7 +588,7 @@ struct tf_free_tbl_scope_parms {
  *
  * This API will allocate the table region in
  * DRAM, program the PTU page table entries, and program the number of static
- * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
+ * buckets (if Brd4) in the RX and TX CFAs.  Buckets are assumed to start at
  * 0 in the EM memory for the scope.  Upon successful completion of this API,
  * hash tables are fully initialized and ready for entries to be inserted.
  *
@@ -563,7 +605,7 @@ struct tf_free_tbl_scope_parms {
  * memory allocated based on the rx_mem_size_in_mb/tx_mem_size_in_mb parameters.  The
  * hash table buckets are stored at the beginning of that memory.
  *
- * NOTES:  No EM internal setup is done here. On chip EM records are managed
+ * NOTE:  No EM internal setup is done here. On chip EM records are managed
  * internally by TruFlow core.
  *
  * Returns success or failure code.
@@ -577,7 +619,7 @@ int tf_alloc_tbl_scope(struct tf *tfp,
  *
  * Firmware checks that the table scope ID is owned by the TruFlow
  * session, verifies that no references to this table scope remain
- * (SR2 ILT or Profile TCAM entries) for either CFA (RX/TX) direction,
+ * (Brd4 ILT or Profile TCAM entries) for either CFA (RX/TX) direction,
  * then frees the table scope ID.
  *
  * Returns success or failure code.
@@ -905,4 +947,430 @@ enum tf_tbl_type {
 	TF_TBL_TYPE_EXT_0,
 	TF_TBL_TYPE_MAX
 };
+
+/** tf_alloc_tbl_entry parameter definition
+ */
+struct tf_alloc_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/** allocate index table entries
+ *
+ * Internal types:
+ *
+ * Allocate an on-chip index table entry or search for a matching
+ * entry of the indicated type for this TruFlow session.
+ *
+ * Allocates an index table record. This function will attempt to
+ * allocate an entry or search an index table for a matching entry if
+ * search is enabled (only the shadow copy of the table is accessed).
+ *
+ * If search is not enabled, the first available free entry is
+ * returned. If search is enabled and an entry matching the result
+ * data is found, hit is set to TRUE and success is returned.
+ *
+ * External types:
+ *
+ * These are used to allocate inlined action record memory.
+ *
+ * Allocates an external index table action record.
+ *
+ * NOTE:
+ * Implementation of the internals of this function will be a stack with push
+ * and pop.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_tbl_entry(struct tf *tfp,
+		       struct tf_alloc_tbl_entry_parms *parms);
+
+/** tf_free_tbl_entry parameter definition
+ */
+struct tf_free_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+/** free index table entry
+ *
+ * Used to free a previously allocated table entry.
+ *
+ * Internal types:
+ *
+ * If session has shadow_copy enabled the shadow DB is searched and if
+ * found the element ref_cnt is decremented. If ref_cnt goes to
+ * zero then the element is returned to the session pool.
+ *
+ * If the session does not have a shadow DB, the element is freed and
+ * given back to the session pool.
+ *
+ * External types:
+ *
+ * Frees an external index table action record.
+ *
+ * NOTE:
+ * Implementation of the internals of this function will be a stack with push
+ * and pop.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tbl_entry(struct tf *tfp,
+		      struct tf_free_tbl_entry_parms *parms);
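+
+/** Usage sketch pairing the allocate and free calls above (the type is
+ * illustrative; search is disabled, so result/result_sz_in_bytes are
+ * not used):
+ *
+ *   struct tf_alloc_tbl_entry_parms ap = { 0 };
+ *   struct tf_free_tbl_entry_parms fp = { 0 };
+ *
+ *   ap.dir = TF_DIR_RX;
+ *   ap.type = TF_TBL_TYPE_EXT;
+ *   if (tf_alloc_tbl_entry(tfp, &ap) == 0) {
+ *           fp.dir = ap.dir;
+ *           fp.type = ap.type;
+ *           fp.idx = ap.idx;
+ *           tf_free_tbl_entry(tfp, &fp);
+ *   }
+ */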
+
+/** tf_set_tbl_entry parameter definition
+ */
+struct tf_set_tbl_entry_parms {
+	/**
+	 * [in] Table scope identifier
+	 *
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/** set index table entry
+ *
+ * Used to insert an application programmed index table entry into a
+ * previously allocated table location.  A shadow copy of the table
+ * is maintained (if enabled) (only for internal objects)
+ *
+ * Returns success or failure code.
+ */
+int tf_set_tbl_entry(struct tf *tfp,
+		     struct tf_set_tbl_entry_parms *parms);
+
+/** tf_get_tbl_entry parameter definition
+ */
+struct tf_get_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/** get index table entry
+ *
+ * Used to retrieve a previously set index table entry.
+ *
+ * Reads and compares with the shadow table copy (if enabled) (only
+ * for internal objects).
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_tbl_entry(struct tf *tfp,
+		     struct tf_get_tbl_entry_parms *parms);
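+
+/** Usage sketch writing an entry and reading it back (index, type and
+ * data are hypothetical; the type must match the earlier allocation):
+ *
+ *   uint8_t rec[16] = { 0 };
+ *   struct tf_set_tbl_entry_parms sp = { 0 };
+ *   struct tf_get_tbl_entry_parms gp = { 0 };
+ *
+ *   sp.dir = TF_DIR_RX;
+ *   sp.type = TF_TBL_TYPE_EXT;
+ *   sp.data = rec;
+ *   sp.data_sz_in_bytes = sizeof(rec);
+ *   sp.idx = 10;
+ *   if (tf_set_tbl_entry(tfp, &sp) == 0) {
+ *           gp.dir = sp.dir;
+ *           gp.type = sp.type;
+ *           gp.data = rec;
+ *           gp.data_sz_in_bytes = sizeof(rec);
+ *           gp.idx = sp.idx;
+ *           tf_get_tbl_entry(tfp, &gp);
+ *   }
+ */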
+
+/**
+ * @page exact_match Exact Match Table
+ *
+ * @ref tf_insert_em_entry
+ *
+ * @ref tf_delete_em_entry
+ *
+ * @ref tf_search_em_entry
+ *
+ */
+/** tf_insert_em_entry parameter definition
+ */
+struct tf_insert_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] ptr to structure containing key fields
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key bit length
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] ptr to structure containing result field
+	 */
+	uint8_t *em_record;
+	/**
+	 * [out] result size in bits
+	 */
+	uint16_t em_record_sz_in_bits;
+	/**
+	 * [in] duplicate check flag
+	 */
+	uint8_t	dup_check;
+	/**
+	 * [out] Flow handle value for the inserted entry.  This is encoded
+	 * as the entries[4]:bucket[2]:hashId[1]:hash[14]
+	 */
+	uint64_t flow_handle;
+	/**
+	 * [out] Flow id is returned as null (internal)
+	 * Flow id is the GFID value for the inserted entry (external)
+	 * This is the value written to the BD and useful information for mark.
+	 */
+	uint64_t flow_id;
+};
+/**
+ * tf_delete_em_entry parameter definition
+ */
+struct tf_delete_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] epoch group IDs of entry to delete
+	 * 2 element array with 2 ids. (Brd4 only)
+	 */
+	uint16_t *epochs;
+	/**
+	 * [in] structure containing flow delete handle information
+	 */
+	uint64_t flow_handle;
+};
+/**
+ * tf_search_em_entry parameter definition
+ */
+struct tf_search_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] ptr to structure containing key fields
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key bit length
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in/out] ptr to structure containing EM record fields
+	 */
+	uint8_t *em_record;
+	/**
+	 * [out] result size in bits
+	 */
+	uint16_t em_record_sz_in_bits;
+	/**
+	 * [in] epoch group IDs of entry to lookup
+	 * 2 element array with 2 ids. (Brd4 only)
+	 */
+	uint16_t *epochs;
+	/**
+	 * [in] ptr to structure containing flow delete handle
+	 */
+	uint64_t flow_handle;
+};
+
+/** insert em hash entry in internal table memory
+ *
+ * Internal:
+ *
+ * This API inserts an exact match entry into internal EM table memory
+ * of the specified direction.
+ *
+ * Note: The EM record is managed within the TruFlow core and not the
+ * application.
+ *
+ * A shadow copy of the internal record table maintains the association
+ * between the hash and its 1, 2, or 4 associated buckets.
+ *
+ * External:
+ * This API inserts an exact match entry into DRAM EM table memory of the
+ * specified direction and table scope.
+ *
+ * When inserting an entry into an exact match table, the TruFlow library may
+ * need to allocate a dynamic bucket for the entry (Brd4 only).
+ *
+ * The insertion of duplicate entries in an EM table is not permitted.	If a
+ * TruFlow application can guarantee that it will never insert duplicates, it
+ * can disable duplicate checking by passing a zero value in the  dup_check
+ * parameter to this API.  This will optimize performance. Otherwise, the
+ * TruFlow library will enforce protection against inserting duplicate entries.
+ *
+ * Flow handle is defined in this document:
+ *
+ * https://docs.google.com/document/d/1NESu7RpTN3jwxbokaPfYORQyChYRmJgs40wMIRe8_-Q/edit
+ *
+ * Returns success or busy code.
+ *
+ */
+int tf_insert_em_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms);
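+
+/** Usage sketch for an external (EEM) insert. The key bytes, record
+ * contents and table scope ID are hypothetical; the key buffer is the
+ * 52-byte key plus 4 bytes of padding, and flow_handle is kept for a
+ * later tf_delete_em_entry():
+ *
+ *   uint8_t key[56] = { 0 };
+ *   uint8_t record[8] = { 0 };
+ *   uint64_t flow_handle = 0;
+ *   struct tf_insert_em_entry_parms ep = { 0 };
+ *
+ *   ep.dir = TF_DIR_RX;
+ *   ep.mem = TF_MEM_EXTERNAL;
+ *   ep.tbl_scope_id = 1;
+ *   ep.key = key;
+ *   ep.key_sz_in_bits = 416;
+ *   ep.em_record = record;
+ *   ep.dup_check = 1;
+ *   if (tf_insert_em_entry(tfp, &ep) == 0)
+ *           flow_handle = ep.flow_handle;
+ */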
+
+/** delete em hash entry from table memory
+ *
+ * Internal:
+ *
+ * This API deletes an exact match entry from internal EM table memory of the
+ * specified direction. If a valid flow ptr is passed in then that takes
+ * precedence over the pointer to the complete key passed in.
+ *
+ *
+ * External:
+ *
+ * This API deletes an exact match entry from EM table memory of the specified
+ * direction and table scope. If a valid flow handle is passed in then that
+ * takes precedence over the pointer to the complete key passed in.
+ *
+ * The TruFlow library may release a dynamic bucket when an entry is deleted.
+ *
+ *
+ * Returns success or not found code
+ *
+ *
+ */
+int tf_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms);
+
+/** search em hash entry in table memory
+ *
+ * Internal:
+ *
+ * This API looks up an EM entry in table memory with the specified EM
+ * key or flow (flow takes precedence) and direction.
+ *
+ * The status will be one of: success or entry not found.  If the lookup
+ * succeeds, a pointer to the matching entry and the result record associated
+ * with the matching entry will be provided.
+ *
+ * If flow_handle is set, search shadow copy.
+ *
+ * Otherwise, query the fw with key to get result.
+ *
+ * External:
+ *
+ * This API looks up an EM entry in table memory with the specified EM
+ * key or flow_handle (flow takes precedence), direction and table scope.
+ *
+ * The status will be one of: success or entry not found.  If the lookup
+ * succeeds, a pointer to the matching entry and the result record associated
+ * with the matching entry will be provided.
+ *
+ * Returns success or not found code
+ *
+ */
+int tf_search_em_entry(struct tf *tfp,
+		       struct tf_search_em_entry_parms *parms);
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
new file mode 100644
index 0000000..7109eb1
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -0,0 +1,516 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_em.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+/* Enable EEM table dump
+ */
+#define TF_EEM_DUMP
+
+static struct tf_eem_64b_entry zero_key_entry;
+
+static uint32_t tf_em_get_key_mask(int num_entries)
+{
+	uint32_t mask = num_entries - 1;
+
+	if (num_entries & 0x7FFF)
+		return 0;
+
+	if (num_entries > (128 * 1024 * 1024))
+		return 0;
+
+	return mask;
+}
+
+/* CRC32i support for Key0 hash */
+#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
+#define crc32(x, y) crc32i(~0, x, y)
+
+static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
+0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
+0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
+0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
+0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
+0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
+0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
+0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
+0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
+0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
+0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
+0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
+0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
+0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
+0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
+0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
+0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
+0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
+0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
+0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
+0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
+0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
+0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
+0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
+0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
+0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
+0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
+0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
+0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
+0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
+0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
+0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
+0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
+0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
+0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
+0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
+0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
+0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
+0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
+0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
+0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
+0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
+0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
+0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
+0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
+0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
+0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
+0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
+0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
+0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
+0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
+0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
+0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
+0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
+0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
+0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
+0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
+0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
+0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
+0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
+0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
+0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
+0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
+0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
+0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
+};
+
+static uint32_t crc32i(uint32_t crc, const uint8_t *buf, size_t len)
+{
+	int l;
+
+	for (l = (len - 1); l >= 0; l--) {
+		crc = ucrc32(buf[l], crc);
+	}
+
+	return ~crc;
+}
+
+static uint32_t tf_em_lkup_get_crc32_hash(struct tf_session *session,
+					  uint8_t *key,
+					  enum tf_dir dir)
+{
+	int i;
+	uint32_t index;
+	uint32_t val1, val2;
+	uint8_t temp[4];
+	uint8_t *kptr = key;
+
+	/* Do byte-wise XOR of the 52-byte HASH key first. */
+	index = *key;
+	kptr--;
+
+	for (i = TF_HW_EM_KEY_MAX_SIZE - 2; i >= 0; i--) {
+		index = index ^ *kptr;
+		kptr--;
+	}
+
+	/* Get seeds */
+	val1 = session->lkup_em_seed_mem[dir][index * 2];
+	val2 = session->lkup_em_seed_mem[dir][index * 2 + 1];
+
+	temp[3] = (uint8_t)(val1 >> 24);
+	temp[2] = (uint8_t)(val1 >> 16);
+	temp[1] = (uint8_t)(val1 >> 8);
+	temp[0] = (uint8_t)(val1 & 0xff);
+	val1 = 0;
+
+	/* Start with seed */
+	if (!(val2 & 0x1))
+		val1 = crc32i(~val1, temp, 4);
+
+	val1 = crc32i(~val1,
+		      (key - (TF_HW_EM_KEY_MAX_SIZE - 1)),
+		      TF_HW_EM_KEY_MAX_SIZE);
+
+	/* End with seed */
+	if (val2 & 0x1)
+		val1 = crc32i(~val1, temp, 4);
+
+	return val1;
+}
+
+static uint32_t tf_em_lkup_get_lookup3_hash(uint32_t lookup3_init_value,
+					    uint8_t *in_key)
+{
+	uint32_t val1;
+
+	val1 = hashword(((uint32_t *)in_key) + 1,
+			 TF_HW_EM_KEY_MAX_SIZE / (sizeof(uint32_t)),
+			 lookup3_init_value);
+
+	return val1;
+}
+
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum tf_em_table_type table_type)
+{
+	int level = 0;
+	int page = offset / TF_EM_PAGE_SIZE;
+	void *addr = NULL;
+	struct tf_em_ctx_mem_info *ctx = &tbl_scope_cb->em_ctx_info[dir];
+
+	if (ctx == NULL)
+		return NULL;
+
+	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
+		return NULL;
+
+	if (table_type < KEY0_TABLE || table_type > EFC_TABLE)
+		return NULL;
+
+	/*
+	 * Use the leaf level (num_lvl - 1) of the page table
+	 */
+	level = ctx->em_tables[table_type].num_lvl - 1;
+
+	addr = (void *)ctx->em_tables[table_type].pg_tbl[level].pg_va_tbl[page];
+
+	return addr;
+}
+
+/** Read Key table entry
+ *
+ * Entry is read in to entry
+ */
+static int tf_em_read_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+				 struct tf_eem_64b_entry *entry,
+				 uint32_t entry_size,
+				 uint32_t index,
+				 enum tf_em_table_type table_type,
+				 enum tf_dir dir)
+{
+	void *page;
+	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
+
+	page = tf_em_get_table_page(tbl_scope_cb,
+				    dir,
+				    (index * entry_size),
+				    table_type);
+
+	if (page == NULL)
+		return -EINVAL;
+
+	memcpy((uint8_t *)entry, (uint8_t *)page + entry_offset, entry_size);
+	return 0;
+}
+
+static int tf_em_write_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+				 struct tf_eem_64b_entry *entry,
+				 uint32_t entry_size,
+				 uint32_t index,
+				 enum tf_em_table_type table_type,
+				 enum tf_dir dir)
+{
+	void *page;
+	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
+
+	page = tf_em_get_table_page(tbl_scope_cb,
+				    dir,
+				    (index * entry_size),
+				    table_type);
+
+	if (page == NULL)
+		return -EINVAL;
+
+	memcpy((uint8_t *)page + entry_offset, entry, entry_size);
+
+	return 0;
+}
+
+static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_eem_64b_entry *entry,
+			       uint32_t index,
+			       enum tf_em_table_type table_type,
+			       enum tf_dir dir)
+{
+	int rc;
+	struct tf_eem_64b_entry table_entry;
+
+	rc = tf_em_read_entry(tbl_scope_cb,
+			      &table_entry,
+			      TF_EM_KEY_RECORD_SIZE,
+			      index,
+			      table_type,
+			      dir);
+
+	if (rc != 0)
+		return -EINVAL;
+
+	if (table_entry.hdr.word1 & (1 << TF_LKUP_RECORD_VALID_SHIFT)) {
+		if (entry != NULL) {
+			if (memcmp(&table_entry,
+				   entry,
+				   TF_EM_KEY_RECORD_SIZE) == 0)
+				return -EEXIST;
+		} else {
+			return -EEXIST;
+		}
+
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
+				    uint8_t	       *in_key,
+				    struct tf_eem_64b_entry *key_entry)
+{
+	key_entry->hdr.word1 = result->word1;
+
+	/* Internal and external action records currently use the same
+	 * pointer encoding, so no conditional is needed here.
+	 */
+	key_entry->hdr.pointer = result->pointer;
+
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+}
+
+/* tf_em_select_inject_table
+ *
+ * Returns:
+ * 0 - Key does not exist in either table and can be inserted
+ *		at "index" in table "table".
+ * EEXIST  - Key does exist in table at "index" in table "table".
+ * TF_ERR     - Something went horribly wrong.
+ */
+static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
+					  enum tf_dir dir,
+					  struct tf_eem_64b_entry *entry,
+					  uint32_t key0_hash,
+					  uint32_t key1_hash,
+					  uint32_t *index,
+					  enum tf_em_table_type *table)
+{
+	int key0_entry;
+	int key1_entry;
+
+	/*
+	 * Check KEY0 table.
+	 */
+	key0_entry = tf_em_entry_exists(tbl_scope_cb,
+					 entry,
+					 key0_hash,
+					 KEY0_TABLE,
+					 dir);
+
+	/*
+	 * Check KEY1 table.
+	 */
+	key1_entry = tf_em_entry_exists(tbl_scope_cb,
+					 entry,
+					 key1_hash,
+					 KEY1_TABLE,
+					 dir);
+
+	if (key0_entry == -EEXIST) {
+		*table = KEY0_TABLE;
+		*index = key0_hash;
+		return -EEXIST;
+	} else if (key1_entry == -EEXIST) {
+		*table = KEY1_TABLE;
+		*index = key1_hash;
+		return -EEXIST;
+	} else if (key0_entry == 0) {
+		*table = KEY0_TABLE;
+		*index = key0_hash;
+		return 0;
+	} else if (key1_entry == 0) {
+		*table = KEY1_TABLE;
+		*index = key1_hash;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+int tf_insert_eem_entry(struct tf_session	   *session,
+			struct tf_tbl_scope_cb	   *tbl_scope_cb,
+			struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t	   mask;
+	uint32_t	   key0_hash;
+	uint32_t	   key1_hash;
+	uint32_t	   key0_index;
+	uint32_t	   key1_index;
+	struct tf_eem_64b_entry key_entry;
+	uint32_t	   index;
+	enum tf_em_table_type table_type;
+	uint32_t	   gfid;
+	int		   num_of_entry;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
+
+	key0_hash = tf_em_lkup_get_crc32_hash(session,
+				      &parms->key[num_of_entry] - 1,
+				      parms->dir);
+	key0_index = key0_hash & mask;
+
+	key1_hash =
+	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
+					parms->key);
+	key1_index = key1_hash & mask;
+
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct tf_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Find which table to use
+	 */
+	if (tf_em_select_inject_table(tbl_scope_cb,
+				      parms->dir,
+				      &key_entry,
+				      key0_index,
+				      key1_index,
+				      &index,
+				      &table_type) == 0) {
+		if (table_type == KEY0_TABLE) {
+			TF_SET_GFID(gfid,
+				    key0_index,
+				    KEY0_TABLE);
+		} else {
+			TF_SET_GFID(gfid,
+				    key1_index,
+				    KEY1_TABLE);
+		}
+
+		/*
+		 * Inject
+		 */
+		if (tf_em_write_entry(tbl_scope_cb,
+				      &key_entry,
+				      TF_EM_KEY_RECORD_SIZE,
+				      index,
+				      table_type,
+				      parms->dir) == 0) {
+			TF_SET_FLOW_ID(parms->flow_id,
+				       gfid,
+				       TF_GFID_TABLE_EXTERNAL,
+				       parms->dir);
+			TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+						     0,
+						     0,
+						     0,
+						     index,
+						     0,
+						     table_type);
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * insert callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+int tf_delete_eem_entry(struct tf *tfp,
+			struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_session	   *session;
+	struct tf_tbl_scope_cb	   *tbl_scope_cb;
+	enum tf_em_table_type hash_type;
+	uint32_t index;
+
+	if (parms == NULL)
+		return -EINVAL;
+
+	session = (struct tf_session *)tfp->session->core_data;
+	if (session == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	if (parms->flow_handle == 0)
+		return -EINVAL;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	if (tf_em_entry_exists(tbl_scope_cb,
+			       NULL,
+			       index,
+			       hash_type,
+			       parms->dir) == -EEXIST) {
+		tf_em_write_entry(tbl_scope_cb,
+				  &zero_key_entry,
+				  TF_EM_KEY_RECORD_SIZE,
+				  index,
+				  hash_type,
+				  parms->dir);
+
+		return 0;
+	}
+
+	return -EINVAL;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
new file mode 100644
index 0000000..8a3584f
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EM_H_
+#define _TF_EM_H_
+
+#include "tf_core.h"
+#include "tf_session.h"
+
+#define TF_HW_EM_KEY_MAX_SIZE 52
+#define TF_EM_KEY_RECORD_SIZE 64
+
+/** EEM Entry header
+ *
+ */
+struct tf_eem_entry_hdr {
+	uint32_t pointer;
+	uint32_t word1;  /*
+			  * The header is made up of two words,
+			  * this is the first word. This field has multiple
+			  * subfields, there is no suitable single name for
+			  * it so just going with word1.
+			  */
+#define TF_LKUP_RECORD_VALID_SHIFT 31
+#define TF_LKUP_RECORD_VALID_MASK 0x80000000
+#define TF_LKUP_RECORD_L1_CACHEABLE_SHIFT 30
+#define TF_LKUP_RECORD_L1_CACHEABLE_MASK 0x40000000
+#define TF_LKUP_RECORD_STRENGTH_SHIFT 28
+#define TF_LKUP_RECORD_STRENGTH_MASK 0x30000000
+#define TF_LKUP_RECORD_RESERVED_SHIFT 17
+#define TF_LKUP_RECORD_RESERVED_MASK 0x0FFE0000
+#define TF_LKUP_RECORD_KEY_SIZE_SHIFT 8
+#define TF_LKUP_RECORD_KEY_SIZE_MASK 0x0001FF00
+#define TF_LKUP_RECORD_ACT_REC_SIZE_SHIFT 3
+#define TF_LKUP_RECORD_ACT_REC_SIZE_MASK 0x000000F8
+#define TF_LKUP_RECORD_ACT_REC_INT_SHIFT 2
+#define TF_LKUP_RECORD_ACT_REC_INT_MASK 0x00000004
+#define TF_LKUP_RECORD_EXT_FLOW_CTR_SHIFT 1
+#define TF_LKUP_RECORD_EXT_FLOW_CTR_MASK 0x00000002
+#define TF_LKUP_RECORD_ACT_PTR_MSB_SHIFT 0
+#define TF_LKUP_RECORD_ACT_PTR_MSB_MASK 0x00000001
+};
+
+/** EEM Entry
+ *  Each EEM entry is 512-bit (64-bytes)
+ */
+struct tf_eem_64b_entry {
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+	/** Header is 8 bytes long */
+	struct tf_eem_entry_hdr hdr;
+};
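+
+/* The intent is that the layout above is exactly TF_EM_KEY_RECORD_SIZE
+ * (64) bytes; a build-time check along these lines (placed in a .c
+ * file) would catch accidental padding:
+ *
+ *   RTE_BUILD_BUG_ON(sizeof(struct tf_eem_64b_entry) !=
+ *                    TF_EM_KEY_RECORD_SIZE);
+ */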
+
+/**
+ * Allocates EEM Table scope
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ *   -ENOMEM - Out of memory
+ */
+int tf_alloc_eem_tbl_scope(struct tf *tfp,
+			   struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Frees the EEM Table scope control block
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_free_eem_tbl_scope_cb(struct tf *tfp,
+			     struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Function to search for table scope control block structure
+ * with specified table scope ID.
+ *
+ * [in] session
+ *   Session to use for the search of the table scope control block
+ * [in] tbl_scope_id
+ *   Table scope ID to search for
+ *
+ * Returns:
+ *  Pointer to the found table scope control block struct or NULL if
+ *  table scope control block struct not found
+ */
+struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
+					  uint32_t tbl_scope_id);
+
+int tf_insert_eem_entry(struct tf_session *session,
+			struct tf_tbl_scope_cb *tbl_scope_cb,
+			struct tf_insert_em_entry_parms *parms);
+
+int tf_delete_eem_entry(struct tf *tfp,
+			struct tf_delete_em_entry_parms *parms);
+
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum tf_em_table_type table_type);
+
+#endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
new file mode 100644
index 0000000..417a99c
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
@@ -0,0 +1,166 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EXT_FLOW_HANDLE_H_
+#define _TF_EXT_FLOW_HANDLE_H_
+
+#define TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK	0x00000000F0000000ULL
+#define TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT	28
+#define TF_FLOW_TYPE_FLOW_HANDLE_MASK		0x00000000000000F0ULL
+#define TF_FLOW_TYPE_FLOW_HANDLE_SFT		4
+#define TF_FLAGS_FLOW_HANDLE_MASK		0x000000000000000FULL
+#define TF_FLAGS_FLOW_HANDLE_SFT		0
+#define TF_INDEX_FLOW_HANDLE_MASK		0xFFFFFFF000000000ULL
+#define TF_INDEX_FLOW_HANDLE_SFT		36
+#define TF_ENTRY_NUM_FLOW_HANDLE_MASK		0x0000000E00000000ULL
+#define TF_ENTRY_NUM_FLOW_HANDLE_SFT		33
+#define TF_HASH_TYPE_FLOW_HANDLE_MASK		0x0000000100000000ULL
+#define TF_HASH_TYPE_FLOW_HANDLE_SFT		32
+
+#define TF_FLOW_HANDLE_MASK (TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK |	\
+				TF_FLOW_TYPE_FLOW_HANDLE_MASK |		\
+				TF_FLAGS_FLOW_HANDLE_MASK |		\
+				TF_INDEX_FLOW_HANDLE_MASK |		\
+				TF_ENTRY_NUM_FLOW_HANDLE_MASK |		\
+				TF_HASH_TYPE_FLOW_HANDLE_MASK)
+
+#define TF_GET_FIELDS_FROM_FLOW_HANDLE(flow_handle,			\
+				       num_key_entries,			\
+				       flow_type,			\
+				       flags,				\
+				       index,				\
+				       entry_num,			\
+				       hash_type)			\
+do {									\
+	(num_key_entries) = \
+		(((flow_handle) & TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK) >> \
+		 TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT);			\
+	(flow_type) = (((flow_handle) & TF_FLOW_TYPE_FLOW_HANDLE_MASK) >> \
+		     TF_FLOW_TYPE_FLOW_HANDLE_SFT);			\
+	(flags) = (((flow_handle) & TF_FLAGS_FLOW_HANDLE_MASK) >>	\
+		     TF_FLAGS_FLOW_HANDLE_SFT);				\
+	(index) = (((flow_handle) & TF_INDEX_FLOW_HANDLE_MASK) >>	\
+		     TF_INDEX_FLOW_HANDLE_SFT);				\
+	(entry_num) = (((flow_handle) & TF_ENTRY_NUM_FLOW_HANDLE_MASK) >> \
+		     TF_ENTRY_NUM_FLOW_HANDLE_SFT);			\
+	(hash_type) = (((flow_handle) & TF_HASH_TYPE_FLOW_HANDLE_MASK) >> \
+		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
+} while (0)
+
+#define TF_SET_FIELDS_IN_FLOW_HANDLE(flow_handle,			\
+				     num_key_entries,			\
+				     flow_type,				\
+				     flags,				\
+				     index,				\
+				     entry_num,				\
+				     hash_type)				\
+do {									\
+	(flow_handle) &= ~TF_FLOW_HANDLE_MASK;				\
+	(flow_handle) |= \
+		(((num_key_entries) << TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT) & \
+		 TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= (((flow_type) << TF_FLOW_TYPE_FLOW_HANDLE_SFT) & \
+			TF_FLOW_TYPE_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= (((flags) << TF_FLAGS_FLOW_HANDLE_SFT) &	\
+			TF_FLAGS_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= ((((uint64_t)index) << TF_INDEX_FLOW_HANDLE_SFT) & \
+			TF_INDEX_FLOW_HANDLE_MASK);			\
+	(flow_handle) |=						\
+		((((uint64_t)entry_num) << TF_ENTRY_NUM_FLOW_HANDLE_SFT) & \
+		 TF_ENTRY_NUM_FLOW_HANDLE_MASK);			\
+	(flow_handle) |=						\
+		((((uint64_t)hash_type) << TF_HASH_TYPE_FLOW_HANDLE_SFT) & \
+		 TF_HASH_TYPE_FLOW_HANDLE_MASK);			\
+} while (0)
+#define TF_SET_FIELDS_IN_WH_FLOW_HANDLE TF_SET_FIELDS_IN_FLOW_HANDLE
+
+#define TF_GET_INDEX_FROM_FLOW_HANDLE(flow_handle,			\
+				      index)				\
+do {									\
+	index = (((flow_handle) & TF_INDEX_FLOW_HANDLE_MASK) >>		\
+		     TF_INDEX_FLOW_HANDLE_SFT);				\
+} while (0)
+
+#define TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(flow_handle,			\
+					  hash_type)			\
+do {									\
+	hash_type = (((flow_handle) & TF_HASH_TYPE_FLOW_HANDLE_MASK) >>	\
+		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
+} while (0)
+
+/*
+ * 32 bit Flow ID handlers
+ */
+#define TF_GFID_FLOW_ID_MASK		0xFFFFFFF0UL
+#define TF_GFID_FLOW_ID_SFT		4
+#define TF_FLAG_FLOW_ID_MASK		0x00000002UL
+#define TF_FLAG_FLOW_ID_SFT		1
+#define TF_DIR_FLOW_ID_MASK		0x00000001UL
+#define TF_DIR_FLOW_ID_SFT		0
+
+#define TF_SET_FLOW_ID(flow_id, gfid, flag, dir)			\
+do {									\
+	(flow_id) &= ~(TF_GFID_FLOW_ID_MASK |				\
+		     TF_FLAG_FLOW_ID_MASK |				\
+		     TF_DIR_FLOW_ID_MASK);				\
+	(flow_id) |= (((gfid) << TF_GFID_FLOW_ID_SFT) &			\
+		    TF_GFID_FLOW_ID_MASK) |				\
+		(((flag) << TF_FLAG_FLOW_ID_SFT) &			\
+		 TF_FLAG_FLOW_ID_MASK) |				\
+		(((dir) << TF_DIR_FLOW_ID_SFT) &			\
+		 TF_DIR_FLOW_ID_MASK);					\
+} while (0)
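+
+/*
+ * Example (illustrative): gfid = 0x5, flag = 0, dir = 1 packs to
+ * flow_id = (0x5 << 4) | (0 << 1) | 1 = 0x51.
+ */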
+
+#define TF_GET_GFID_FROM_FLOW_ID(flow_id, gfid)				\
+do {									\
+	gfid = (((flow_id) & TF_GFID_FLOW_ID_MASK) >>			\
+		TF_GFID_FLOW_ID_SFT);					\
+} while (0)
+
+#define TF_GET_DIR_FROM_FLOW_ID(flow_id, dir)				\
+do {									\
+	dir = (((flow_id) & TF_DIR_FLOW_ID_MASK) >>			\
+		TF_DIR_FLOW_ID_SFT);					\
+} while (0)
+
+#define TF_GET_FLAG_FROM_FLOW_ID(flow_id, flag)				\
+do {									\
+	flag = (((flow_id) & TF_FLAG_FLOW_ID_MASK) >>			\
+		TF_FLAG_FLOW_ID_SFT);					\
+} while (0)
+
+/*
+ * 32 bit GFID handlers
+ */
+#define TF_HASH_INDEX_GFID_MASK	0x07FFFFFFUL
+#define TF_HASH_INDEX_GFID_SFT	0
+#define TF_HASH_TYPE_GFID_MASK	0x08000000UL
+#define TF_HASH_TYPE_GFID_SFT	27
+
+#define TF_GFID_TABLE_INTERNAL 0
+#define TF_GFID_TABLE_EXTERNAL 1
+
+#define TF_SET_GFID(gfid, index, type)					\
+do {									\
+	gfid = (((index) << TF_HASH_INDEX_GFID_SFT) &			\
+		TF_HASH_INDEX_GFID_MASK) |				\
+		(((type) << TF_HASH_TYPE_GFID_SFT) &			\
+		 TF_HASH_TYPE_GFID_MASK);				\
+} while (0)
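+
+/*
+ * Example (illustrative): index = 0x5 in an external table packs to
+ * gfid = 0x5 | (TF_GFID_TABLE_EXTERNAL << 27) = 0x08000005.
+ */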
+
+#define TF_GET_HASH_INDEX_FROM_GFID(gfid, index)			\
+do {									\
+	index = (((gfid) & TF_HASH_INDEX_GFID_MASK) >>			\
+		TF_HASH_INDEX_GFID_SFT);				\
+} while (0)
+
+#define TF_GET_HASH_TYPE_FROM_GFID(gfid, type)				\
+do {									\
+	type = (((gfid) & TF_HASH_TYPE_GFID_MASK) >>			\
+		TF_HASH_TYPE_GFID_SFT);					\
+} while (0)
+
+
+#endif /* _TF_EXT_FLOW_HANDLE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index b9ed127..c507ec7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -869,6 +869,177 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+/**
+ * Sends EM mem register request to Firmware
+ */
+int tf_msg_em_mem_rgtr(struct tf *tfp,
+		       int           page_lvl,
+		       int           page_size,
+		       uint64_t      dma_addr,
+		       uint16_t     *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
+	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.page_level = page_lvl;
+	req.page_size = page_size;
+	req.page_dir = tfp_cpu_to_le_64(dma_addr);
+
+	parms.tf_type = HWRM_TF_CTXT_MEM_RGTR;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*ctx_id = tfp_le_to_cpu_16(resp.ctx_id);
+
+	return rc;
+}
+
+/**
+ * Sends EM mem unregister request to Firmware
+ */
+int tf_msg_em_mem_unrgtr(struct tf *tfp,
+			 uint16_t  *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_unrgtr_input req = {0};
+	struct hwrm_tf_ctxt_mem_unrgtr_output resp = {0};
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.ctx_id = tfp_cpu_to_le_32(*ctx_id);
+
+	parms.tf_type = HWRM_TF_CTXT_MEM_UNRGTR;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
+ * Sends EM qcaps request to Firmware
+ */
+int tf_msg_em_qcaps(struct tf *tfp,
+		    int dir,
+		    struct tf_em_caps *em_caps)
+{
+	int rc;
+	struct hwrm_tf_ext_em_qcaps_input  req = {0};
+	struct hwrm_tf_ext_em_qcaps_output resp = { 0 };
+	uint32_t             flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+
+	parms.tf_type = HWRM_TF_EXT_EM_QCAPS;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	em_caps->supported = tfp_le_to_cpu_32(resp.supported);
+	em_caps->max_entries_supported =
+		tfp_le_to_cpu_32(resp.max_entries_supported);
+	em_caps->key_entry_size = tfp_le_to_cpu_16(resp.key_entry_size);
+	em_caps->record_entry_size =
+		tfp_le_to_cpu_16(resp.record_entry_size);
+	em_caps->efc_entry_size = tfp_le_to_cpu_16(resp.efc_entry_size);
+
+	return rc;
+}
+
+/**
+ * Sends EM config request to Firmware
+ */
+int tf_msg_em_cfg(struct tf *tfp,
+		  uint32_t   num_entries,
+		  uint16_t   key0_ctx_id,
+		  uint16_t   key1_ctx_id,
+		  uint16_t   record_ctx_id,
+		  uint16_t   efc_ctx_id,
+		  int        dir)
+{
+	int rc;
+	struct hwrm_tf_ext_em_cfg_input  req = {0};
+	struct hwrm_tf_ext_em_cfg_output resp = {0};
+	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	flags |= HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_PREFERRED_OFFLOAD;
+
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.num_entries = tfp_cpu_to_le_32(num_entries);
+
+	req.key0_ctx_id = tfp_cpu_to_le_16(key0_ctx_id);
+	req.key1_ctx_id = tfp_cpu_to_le_16(key1_ctx_id);
+	req.record_ctx_id = tfp_cpu_to_le_16(record_ctx_id);
+	req.efc_ctx_id = tfp_cpu_to_le_16(efc_ctx_id);
+
+	parms.tf_type = HWRM_TF_EXT_EM_CFG;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
+ * Sends EM operation request to Firmware
+ */
+int tf_msg_em_op(struct tf *tfp,
+		 int        dir,
+		 uint16_t   op)
+{
+	int rc;
+	struct hwrm_tf_ext_em_op_input  req = {0};
+	struct hwrm_tf_ext_em_op_output resp = {0};
+	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.op = tfp_cpu_to_le_16(op);
+
+	parms.tf_type = HWRM_TF_EXT_EM_OP;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
 int
 tf_msg_set_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 9055b16..b8d8c1e 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -122,6 +122,46 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   struct tf_rm_entry *sram_entry);
 
 /**
+ * Sends EM mem register request to Firmware
+ */
+int tf_msg_em_mem_rgtr(struct tf *tfp,
+		       int           page_lvl,
+		       int           page_size,
+		       uint64_t      dma_addr,
+		       uint16_t     *ctx_id);
+
+/**
+ * Sends EM mem unregister request to Firmware
+ */
+int tf_msg_em_mem_unrgtr(struct tf *tfp,
+			 uint16_t     *ctx_id);
+
+/**
+ * Sends EM qcaps request to Firmware
+ */
+int tf_msg_em_qcaps(struct tf *tfp,
+		    int dir,
+		    struct tf_em_caps *em_caps);
+
+/**
+ * Sends EM config request to Firmware
+ */
+int tf_msg_em_cfg(struct tf *tfp,
+		  uint32_t      num_entries,
+		  uint16_t      key0_ctx_id,
+		  uint16_t      key1_ctx_id,
+		  uint16_t      record_ctx_id,
+		  uint16_t      efc_ctx_id,
+		  int           dir);
+
+/**
+ * Sends EM operation request to Firmware
+ */
+int tf_msg_em_op(struct tf *tfp,
+		 int        dir,
+		 uint16_t   op);
+
+/**
  * Sends tcam entry 'set' to the Firmware.
  *
  * [in] tfp
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 14bf4ef..632df4b 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -15,7 +15,7 @@
 #include "hsi_struct_def_dpdk.h"
 
 #include "tf_core.h"
-#include "tf_session.h"
+#include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
 #include "hwrm_tf.h"
@@ -30,6 +30,1366 @@
 /* Number of pointers per page_size */
 #define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
+#define TF_EM_PG_SZ_4K        (1 << 12)
+#define TF_EM_PG_SZ_8K        (1 << 13)
+#define TF_EM_PG_SZ_64K       (1 << 16)
+#define TF_EM_PG_SZ_256K      (1 << 18)
+#define TF_EM_PG_SZ_1M        (1 << 20)
+#define TF_EM_PG_SZ_2M        (1 << 21)
+#define TF_EM_PG_SZ_4M        (1 << 22)
+#define TF_EM_PG_SZ_1G        (1 << 30)
+
+#define	TF_EM_CTX_ID_INVALID   0xFFFF
+
+#define	TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define	TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
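+
+/* Illustrative sizing: with 2M EM pages (TF_EM_PAGE_SHIFT 21) and 8-byte
+ * page pointers, MAX_PAGE_PTRS() is 256K, so a single level-0 page can
+ * reference 256K level-1 pages.
+ */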
+
+/**
+ * Function to free a page table
+ *
+ * [in] tp
+ *   Pointer to the page table to free
+ */
+static void
+tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
+{
+	uint32_t i;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		if (!tp->pg_va_tbl[i]) {
+			PMD_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIx64 "\n",
+				    i,
+				    (uint64_t)tp);
+			continue;
+		}
+
+		tfp_free(tp->pg_va_tbl[i]);
+		tp->pg_va_tbl[i] = NULL;
+	}
+
+	tp->pg_count = 0;
+	tfp_free(tp->pg_va_tbl);
+	tp->pg_va_tbl = NULL;
+	tfp_free(tp->pg_pa_tbl);
+	tp->pg_pa_tbl = NULL;
+}
+
+/**
+ * Function to free an EM table
+ *
+ * [in] tbl
+ *   Pointer to the EM table to free
+ */
+static void
+tf_em_free_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		PMD_DRV_LOG(INFO,
+			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
+			   TF_EM_PAGE_SIZE,
+			    i,
+			    tp->pg_count);
+
+		tf_em_free_pg_tbl(tp);
+	}
+
+	tbl->l0_addr = NULL;
+	tbl->l0_dma_addr = 0;
+	tbl->num_lvl = 0;
+	tbl->num_data_pages = 0;
+}
+
+/**
+ * Allocation of page tables
+ *
+ * [in] tp
+ *   Pointer to the page table structure to fill
+ *
+ * [in] pg_count
+ *   Page count to allocate
+ *
+ * [in] pg_size
+ *   Size of each page
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
+		   uint32_t pg_count,
+		   uint32_t pg_size)
+{
+	uint32_t i;
+	struct tfp_calloc_parms parms;
+
+	parms.nitems = pg_count;
+	parms.size = sizeof(void *);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0)
+		return -ENOMEM;
+
+	tp->pg_va_tbl = parms.mem_va;
+
+	if (tfp_calloc(&parms) != 0) {
+		tfp_free(tp->pg_va_tbl);
+		return -ENOMEM;
+	}
+
+	tp->pg_pa_tbl = parms.mem_va;
+
+	tp->pg_count = 0;
+	tp->pg_size = pg_size;
+
+	for (i = 0; i < pg_count; i++) {
+		parms.nitems = 1;
+		parms.size = pg_size;
+		parms.alignment = TF_EM_PAGE_ALIGNMENT;
+
+		if (tfp_calloc(&parms) != 0)
+			goto cleanup;
+
+		tp->pg_pa_tbl[i] = (uint64_t)parms.mem_pa;
+		tp->pg_va_tbl[i] = parms.mem_va;
+
+		memset(tp->pg_va_tbl[i], 0, pg_size);
+		tp->pg_count++;
+	}
+
+	return 0;
+
+cleanup:
+	tf_em_free_pg_tbl(tp);
+	return -ENOMEM;
+}
+
+/**
+ * Allocates EM page tables
+ *
+ * [in] tbl
+ *   Table to allocate pages for
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp;
+	int rc = 0;
+	int i;
+	uint32_t j;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		rc = tf_em_alloc_pg_tbl(tp,
+					tbl->page_cnt[i],
+					TF_EM_PAGE_SIZE);
+		if (rc) {
+			PMD_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d\n",
+				i);
+			goto cleanup;
+		}
+
+		for (j = 0; j < tp->pg_count; j++) {
+			PMD_DRV_LOG(INFO,
+				"EEM: Allocated page table: size %u lvl %d cnt"
+				" %u VA:%p PA:%p\n",
+				TF_EM_PAGE_SIZE,
+				i,
+				tp->pg_count,
+				(uint32_t *)tp->pg_va_tbl[j],
+				(uint32_t *)tp->pg_pa_tbl[j]);
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_free_page_table(tbl);
+	return rc;
+}
+
+/**
+ * Links EM page tables
+ *
+ * [in] tp
+ *   Pointer to page table
+ *
+ * [in] tp_next
+ *   Pointer to the next page table
+ *
+ * [in] set_pte_last
+ *   Flag controlling if the page table is last
+ */
+static void
+tf_em_link_page_table(struct tf_em_page_tbl *tp,
+		      struct tf_em_page_tbl *tp_next,
+		      bool set_pte_last)
+{
+	uint64_t *pg_pa = tp_next->pg_pa_tbl;
+	uint64_t *pg_va;
+	uint64_t valid;
+	uint32_t k = 0;
+	uint32_t i;
+	uint32_t j;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			if (k == tp_next->pg_count - 2 && set_pte_last)
+				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
+			else if (k == tp_next->pg_count - 1 && set_pte_last)
+				valid = PTU_PTE_LAST | PTU_PTE_VALID;
+			else
+				valid = PTU_PTE_VALID;
+
+			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
+			if (++k >= tp_next->pg_count)
+				return;
+		}
+	}
+}
+
+/**
+ * Setup a EM page table
+ *
+ * [in] tbl
+ *   Pointer to EM page table
+ */
+static void
+tf_em_setup_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp_next;
+	struct tf_em_page_tbl *tp;
+	bool set_pte_last = 0;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl - 1; i++) {
+		tp = &tbl->pg_tbl[i];
+		tp_next = &tbl->pg_tbl[i + 1];
+		if (i == tbl->num_lvl - 2)
+			set_pte_last = 1;
+		tf_em_link_page_table(tp, tp_next, set_pte_last);
+	}
+
+	tbl->l0_addr = tbl->pg_tbl[PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[PT_LVL_0].pg_pa_tbl[0];
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
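+
+/* Worked example (illustrative): with page_size = 2M, entry_size = 16
+ * and num_entries = 1M, data_size = 16M exceeds a single 2M page, so
+ * one indirection level (PT_LVL_1) suffices and *num_data_pages = 8.
+ */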
+
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
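+
+/* E.g. referencing the 8 data pages above via 2M page-table pages
+ * (256K pointers each) needs a single page: roundup(8, 256K)/256K = 1.
+ */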
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == PT_LVL_0) {
+		page_cnt[PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == PT_LVL_1) {
+		page_cnt[PT_LVL_1] = num_data_pages;
+		page_cnt[PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	} else if (max_lvl == PT_LVL_2) {
+		page_cnt[PT_LVL_2] = num_data_pages;
+		page_cnt[PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_2], page_size);
+		page_cnt[PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
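+
+/* Continuing the example: max_lvl = PT_LVL_1 with 8 data pages yields
+ * page_cnt[PT_LVL_1] = 8 and page_cnt[PT_LVL_0] = 1.
+ */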
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   - EINVAL - Parameter error
+ *   - ENOMEM - Out of memory
+ */
+static int
+tf_em_size_table(struct tf_em_table *tbl)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
+					  tbl->entry_size,
+					num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		PMD_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		PMD_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type,
+			    (uint64_t)num_entries * tbl->entry_size,
+			    TF_EM_PAGE_SIZE);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
+				page_cnt);
+
+	PMD_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	PMD_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
+		    num_data_pages,
+		    page_cnt[PT_LVL_0],
+		    page_cnt[PT_LVL_1],
+		    page_cnt[PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ */
+static void
+tf_em_ctx_unreg(struct tf *tfp,
+		struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct tf_em_table *tbl;
+	int i;
+
+	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
+			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
+			tf_em_free_page_table(tbl);
+		}
+	}
+}
+
+/**
+ * Registers EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of Memory
+ */
+static int
+tf_em_ctx_reg(struct tf *tfp,
+	      struct tf_tbl_scope_cb *tbl_scope_cb,
+	      int dir)
+{
+	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct tf_em_table *tbl;
+	int rc = 0;
+	int i;
+
+	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries && tbl->entry_size) {
+			rc = tf_em_size_table(tbl);
+
+			if (rc)
+				goto cleanup;
+
+			rc = tf_em_alloc_page_table(tbl);
+			if (rc)
+				goto cleanup;
+
+			tf_em_setup_page_table(tbl);
+			rc = tf_msg_em_mem_rgtr(tfp,
+						tbl->num_lvl - 1,
+						TF_EM_PAGE_SIZE_ENUM,
+						tbl->l0_dma_addr,
+						&tbl->ctx_id);
+			if (rc)
+				goto cleanup;
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	return rc;
+}
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
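+ *
+ * Example (illustrative): rx_mem_size_in_mb = 64 with 128-bit keys and
+ * 256-bit action entries gives key_b = 34 and action_b = 33, so about
+ * 64M/67 ~= 1M entries, which rounds to the power-of-2 count 1M
+ * (rx_num_flows_in_k = 1024).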
+ */
+static int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			PMD_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR, "EEM: Invalid number of Rx requested: "
+				    "%u\n",
+				    num_entries);
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+				    num_entries);
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].num_entries
+		= parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].entry_size
+		= parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[EFC_TABLE].num_entries
+		= 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].num_entries
+		= parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].entry_size
+		= parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[EFC_TABLE].num_entries
+		= 0;
+
+	return 0;
+}
+
+/**
+ * Internal function to set a Table Entry. Supports the writable internal
+ * Table Types (currently full action records and SP SMAC IPv4 entries).
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_set_tbl_entry_internal(struct tf *tfp,
+			  struct tf_set_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Set the entry */
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Set failed, type:%d, rc:%d\n",
+			    parms->dir,
+			    parms->type,
+			    rc);
+	}
+
+	return rc;
+}
+
+/**
+ * Internal function to get a Table Entry. Supports all Table Types
+ * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_get_tbl_entry_internal(struct tf *tfp,
+			  struct tf_get_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Get failed, type:%d, rc:%d\n",
+			    parms->dir,
+			    parms->type,
+			    rc);
+	}
+
+	return rc;
+}
+
+#if (TF_SHADOW == 1)
+/**
+ * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
+ * the requested entry. If found, the ref count is incremented and
+ * returned.
+ *
+ * [in] tfs
+ *   Pointer to session
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry found and ref count incremented
+ *  -ENOENT - Failure, entry not found
+ */
+static int
+tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
+			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
+{
+	PMD_DRV_LOG(ERR,
+		    "dir:%d, Entry Alloc with search not supported\n",
+		    parms->dir);
+
+	return -EOPNOTSUPP;
+}
+
+/**
+ * Free Tbl entry from the Shadow DB. Shadow DB is searched for
+ * the requested entry. If found, the ref count is decremented and the
+ * new ref count returned.
+ *
+ * [in] tfs
+ *   Pointer to session
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry found and ref count decremented
+ *  -ENOENT - Failure, entry not found
+ */
+static int
+tf_free_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
+			 struct tf_free_tbl_entry_parms *parms)
+{
+	PMD_DRV_LOG(ERR,
+		    "dir:%d, Entry Free with search not supported\n",
+		    parms->dir);
+
+	return -EOPNOTSUPP;
+}
+#endif /* TF_SHADOW */
+
+/**
+ * Create External Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ * [in] tbl_scope_id
+ *   id of the table scope
+ * [in] num_entries
+ *   number of entries the pool will hold
+ * [in] entry_sz_bytes
+ *   size of each entry
+ *
+ * Return:
+ *  0       - Success, pool created
+ *  -ENOMEM, -EINVAL
+ *          - Failure, pool not created, out of resources
+ */
+static int
+tf_create_tbl_pool_external(struct tf_session *session,
+			    enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb,
+			    uint32_t table_scope_id,
+			    uint32_t num_entries,
+			    uint32_t entry_sz_bytes)
+
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &tbl_scope_cb->ext_pool[dir][TF_EXT_POOL_0];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		PMD_DRV_LOG(ERR, "%d: TBL: external pool failure %s\n",
+			    dir, strerror(-ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR, "%d: TBL: stack init failure %s\n",
+			    dir, strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Save the allocated memory address so that it can
+	 * be freed when the table scope is freed.
+	 */
+	tbl_scope_cb->ext_pool_mem[dir][TF_EXT_POOL_0] =
+		(uint32_t *)parms.mem_va;
+
+	/* Fill pool with indexes
+	 */
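+	/* Note: entries are byte offsets into the external record table,
+	 * pushed highest-first; e.g. 4 entries of 8 bytes push 31, 23,
+	 * 15 and 7 (the last byte offset of each entry).
+	 */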
+	j = num_entries * entry_sz_bytes - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
+				    dir, strerror(-rc));
+			goto cleanup;
+		}
+		j -= entry_sz_bytes;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
+			    dir, strerror(-rc));
+		goto cleanup;
+	}
+	/* Set the table scope associated with the pool
+	 */
+	session->ext_pool_2_scope[dir][TF_EXT_POOL_0] = table_scope_id;
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Destroy External Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ *
+ */
+static void
+tf_destroy_tbl_pool_external(struct tf_session *session,
+			    enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	uint32_t *ext_pool_mem =
+		tbl_scope_cb->ext_pool_mem[dir][TF_EXT_POOL_0];
+
+	tfp_free(ext_pool_mem);
+
+	/* Set the table scope associated with the pool
+	 */
+	session->ext_pool_2_scope[dir][TF_EXT_POOL_0] = TF_TBL_SCOPE_INVALID;
+}
+
+/**
+ * Allocate External Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+static int
+tf_alloc_tbl_entry_pool_external(struct tf *tfp,
+				 struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	uint32_t index;
+	struct tf_session *tfs;
+	uint32_t tbl_scope_id;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_id = tfs->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+	tbl_scope_cb = tbl_scope_cb_find(tfs, tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, table scope not allocated\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_pool[parms->dir][TF_EXT_POOL_0];
+
+	/* Allocate an element
+	 */
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Allocation failed, type:%d\n",
+		   parms->dir,
+		   parms->type);
+		return rc;
+	}
+	parms->idx = index;
+	return rc;
+}
+
+/**
+ * Allocate Internal Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated
+ *  -ENOMEM - Failure, entry not allocated, out of resources
+ */
+static int
+tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
+				 struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	int free_cnt;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	id = ba_alloc(session_pool);
+	if (id == -1) {
+		free_cnt = ba_free_count(session_pool);
+
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Allocation failed, type:%d, free:%d\n",
+		   parms->dir,
+		   parms->type,
+		   free_cnt);
+		return -ENOMEM;
+	}
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_ADD_BASE,
+			    id,
+			    &index);
+	parms->idx = index;
+	return rc;
+}
+
+/**
+ * Free External Tbl entry to the session pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+static int
+tf_free_tbl_entry_pool_external(struct tf *tfp,
+		       struct tf_free_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	uint32_t index;
+	uint32_t tbl_scope_id;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_id = tfs->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+	tbl_scope_cb = tbl_scope_cb_find(tfs, tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, table scope error\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_pool[parms->dir][TF_EXT_POOL_0];
+
+	index = parms->idx;
+
+	rc = stack_push(pool, index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, consistency error, stack full, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+	}
+	return rc;
+}
+
+/**
+ * Free Internal Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *  -ENOMEM - Failure, entry was not previously allocated
+ */
+static int
+tf_free_tbl_entry_pool_internal(struct tf *tfp,
+		       struct tf_free_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	int id;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+	uint32_t index;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Check if element was indeed allocated */
+	id = ba_inuse_free(session_pool, index);
+	if (id == -1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Element not previously alloc'ed, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -ENOMEM;
+	}
+
+	return rc;
+}
+
 /* API defined in tf_tbl.h */
 void
 tf_init_tbl_pool(struct tf_session *session)
@@ -41,3 +1401,436 @@ tf_init_tbl_pool(struct tf_session *session)
 			TF_TBL_SCOPE_INVALID;
 	}
 }
+
+/* API defined in tf_em.h */
+struct tf_tbl_scope_cb *
+tbl_scope_cb_find(struct tf_session *session,
+		  uint32_t tbl_scope_id)
+{
+	int i;
+
+	/* Check that id is valid */
+	i = ba_inuse(session->tbl_scope_pool_rx, tbl_scope_id);
+	if (i < 0)
+		return NULL;
+
+	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
+		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &session->tbl_scopes[i];
+	}
+
+	return NULL;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_eem_tbl_scope_cb(struct tf *tfp,
+			 struct tf_free_tbl_scope_parms *parms)
+{
+	int rc = 0;
+	enum tf_dir  dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+
+	session = (struct tf_session *)(tfp->session->core_data);
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	/* Free Table control block */
+	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
+
+	/* Release each direction's external pools and EM context */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(session,
+					     dir,
+					     tbl_scope_cb);
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+
+		/* free table scope and all associated resources */
+		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	}
+
+	return rc;
+}
+
+/* API defined in tf_em.h */
+int
+tf_alloc_eem_tbl_scope(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	enum tf_dir dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_em_table *em_tables;
+	int index;
+	struct tf_session *session;
+	struct tf_free_tbl_scope_parms free_parms;
+
+	/* check parameters */
+	if (parms == NULL || tfp->session == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* Get Table Scope control block from the session pool */
+	index = ba_alloc(session->tbl_scope_pool_rx);
+	if (index == -1) {
+		PMD_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
+			    "Control Block\n");
+		return -ENOMEM;
+	}
+
+	tbl_scope_cb = &session->tbl_scopes[index];
+	tbl_scope_cb->index = index;
+	tbl_scope_cb->tbl_scope_id = index;
+	parms->tbl_scope_id = index;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				"EEM: Unable to query for EEM capability\n");
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/*
+		 * Allocate tables and signal configuration to FW
+		 */
+		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx\n");
+			goto cleanup;
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_msg_em_cfg(tfp,
+				   em_tables[KEY0_TABLE].num_entries,
+				   em_tables[KEY0_TABLE].ctx_id,
+				   em_tables[KEY1_TABLE].ctx_id,
+				   em_tables[RECORD_TABLE].ctx_id,
+				   em_tables[EFC_TABLE].ctx_id,
+				   dir);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				"TBL: Unable to configure EEM in firmware\n");
+			goto cleanup_full;
+		}
+
+		rc = tf_msg_em_op(tfp,
+				  dir,
+				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
+
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware\n");
+			goto cleanup_full;
+		}
+
+		/* Allocate the pool of offsets of the external memory.
+		 * Initially, this is a single fixed size pool for all external
+		 * actions related to a single table scope.
+		 */
+		rc = tf_create_tbl_pool_external(session,
+						 dir,
+						 tbl_scope_cb,
+						 index,
+						 TF_EXT_POOL_ENTRY_CNT,
+						 TF_EXT_POOL_ENTRY_SZ_BYTES);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "%d TBL: Unable to allocate idx pools %s\n",
+				    dir,
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	return 0;
+
+cleanup_full:
+	free_parms.tbl_scope_id = index;
+	tf_free_eem_tbl_scope_cb(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
+	return -EINVAL;
+}
+
+/* API defined in tf_core.h */
+int
+tf_set_tbl_entry(struct tf *tfp,
+		 struct tf_set_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+
+	if (tfp == NULL || parms == NULL || parms->data == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		void *base_addr;
+		uint32_t offset = TF_ACT_REC_INDEX_2_OFFSET(parms->idx);
+		uint32_t tbl_scope_id;
+
+		session = (struct tf_session *)(tfp->session->core_data);
+
+		tbl_scope_id =
+			session->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+
+		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Table scope not allocated\n",
+				    parms->dir);
+			return -EINVAL;
+		}
+
+		/* Get the table scope control block associated with the
+		 * external pool
+		 */
+
+		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
+
+		if (tbl_scope_cb == NULL)
+			return -EINVAL;
+
+		/* External table, implicitly the Action table */
+		base_addr = tf_em_get_table_page(tbl_scope_cb,
+						 parms->dir,
+						 offset,
+						 RECORD_TABLE);
+		if (base_addr == NULL) {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Base address lookup failed\n",
+				    parms->dir);
+			return -EINVAL;
+		}
+
+		offset %= TF_EM_PAGE_SIZE;
+		rte_memcpy((char *)base_addr + offset,
+			   parms->data,
+			   parms->data_sz_in_bytes);
+	} else {
+		/* Internal table type processing */
+		rc = tf_set_tbl_entry_internal(tfp, parms);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Set failed, type:%d, rc:%d\n",
+				    parms->dir,
+				    parms->type,
+				    rc);
+		}
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_get_tbl_entry(struct tf *tfp,
+		 struct tf_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, External table type not supported\n",
+			    parms->dir);
+
+		rc = -EOPNOTSUPP;
+	} else {
+		/* Internal table type processing */
+		rc = tf_get_tbl_entry_internal(tfp, parms);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Get failed, type:%d, rc:%d\n",
+				    parms->dir,
+				    parms->type,
+				    rc);
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_scope(struct tf *tfp,
+		   struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+
+	/* check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	rc = tf_alloc_eem_tbl_scope(tfp, parms);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_scope(struct tf *tfp,
+		  struct tf_free_tbl_scope_parms *parms)
+{
+	int rc;
+
+	/* check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	/* free table scope and all associated resources */
+	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_entry(struct tf *tfp,
+		   struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+#if (TF_SHADOW == 1)
+	struct tf_session *tfs;
+#endif /* TF_SHADOW */
+
+	/* Check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+	/*
+	 * No shadow copy support for external tables, allocate and return
+	 */
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		rc = tf_alloc_tbl_entry_pool_external(tfp, parms);
+		return rc;
+	}
+
+#if (TF_SHADOW == 1)
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Search the Shadow DB for requested element. If not found go
+	 * allocate one from the Session Pool
+	 */
+	if (parms->search_enable && tfs->shadow_copy) {
+		rc = tf_alloc_tbl_entry_shadow(tfs, parms);
+		/* Entry found and parms populated with return data */
+		if (rc == 0)
+			return rc;
+	}
+#endif /* TF_SHADOW */
+
+	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
+	if (rc)
+		PMD_DRV_LOG(ERR, "dir:%d, Alloc failed, rc:%d\n",
+			    parms->dir,
+			    rc);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_entry(struct tf *tfp,
+		  struct tf_free_tbl_entry_parms *parms)
+{
+	int rc;
+#if (TF_SHADOW == 1)
+	struct tf_session *tfs;
+#endif /* TF_SHADOW */
+
+	/* Check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+	/*
+	 * No shadow of external tables so just free the entry
+	 */
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		rc = tf_free_tbl_entry_pool_external(tfp, parms);
+		return rc;
+	}
+
+#if (TF_SHADOW == 1)
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Search the Shadow DB for the requested element and free it
+	 * there when found
+	 */
+	if (parms->search_enable && tfs->shadow_copy) {
+		rc = tf_free_tbl_entry_shadow(tfs, parms);
+		/* Entry freed and parms populated with return data */
+		if (rc == 0)
+			return rc;
+	}
+#endif /* TF_SHADOW */
+
+	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
+
+	if (rc)
+		PMD_DRV_LOG(ERR, "dir:%d, Free failed, rc:%d\n",
+			    parms->dir,
+			    rc);
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 5a5e72f..cb7ce9d 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -7,6 +7,7 @@
 #define _TF_TBL_H_
 
 #include <stdint.h>
+#include "stack.h"
 
 enum tf_pg_tbl_lvl {
 	PT_LVL_0,
@@ -15,6 +16,48 @@ enum tf_pg_tbl_lvl {
 	PT_LVL_MAX
 };
 
+enum tf_em_table_type {
+	KEY0_TABLE,
+	KEY1_TABLE,
+	RECORD_TABLE,
+	EFC_TABLE,
+	MAX_TABLE
+};
+
+struct tf_em_page_tbl {
+	uint32_t	pg_count;
+	uint32_t	pg_size;
+	void		**pg_va_tbl;
+	uint64_t	*pg_pa_tbl;
+};
+
+struct tf_em_table {
+	int				type;
+	uint32_t			num_entries;
+	uint16_t			ctx_id;
+	uint32_t			entry_size;
+	int				num_lvl;
+	uint32_t			page_cnt[PT_LVL_MAX];
+	uint64_t			num_data_pages;
+	void				*l0_addr;
+	uint64_t			l0_dma_addr;
+	struct tf_em_page_tbl pg_tbl[PT_LVL_MAX];
+};
+
+struct tf_em_ctx_mem_info {
+	struct tf_em_table		em_tables[MAX_TABLE];
+};
+
+/** table scope control block content */
+struct tf_em_caps {
+	uint32_t flags;
+	uint32_t supported;
+	uint32_t max_entries_supported;
+	uint16_t key_entry_size;
+	uint16_t record_entry_size;
+	uint16_t efc_entry_size;
+};
+
 /** Invalid table scope id */
 #define TF_TBL_SCOPE_INVALID 0xffffffff
 
@@ -27,9 +70,49 @@ enum tf_pg_tbl_lvl {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
+	struct tf_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
+	struct tf_em_caps          em_caps[TF_DIR_MAX];
+	struct stack               ext_pool[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 	uint32_t              *ext_pool_mem[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 };
 
+/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ * Round-down other page sizes to the lower hardware page size supported.
+ */
+#define PAGE_SHIFT 22 /** 4M host page; rounds down to 2M EM pages */
+
+#if (PAGE_SHIFT < 12)				/** < 4K >> 4K */
+#define TF_EM_PAGE_SHIFT 12
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
+#elif (PAGE_SHIFT <= 13)			/** 4K, 8K */
+#define TF_EM_PAGE_SHIFT 13
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
+#elif (PAGE_SHIFT < 16)				/** 16K, 32K >> 8K */
+#define TF_EM_PAGE_SHIFT 15
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_32K
+#elif (PAGE_SHIFT <= 17)			/** 64K, 128K >> 64K */
+#define TF_EM_PAGE_SHIFT 16
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
+#elif (PAGE_SHIFT <= 19)			/** 256K, 512K >> 256K */
+#define TF_EM_PAGE_SHIFT 18
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
+#elif (PAGE_SHIFT <= 21)			/** 1M */
+#define TF_EM_PAGE_SHIFT 20
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
+#elif (PAGE_SHIFT <= 22)			/** 2M, 4M */
+#define TF_EM_PAGE_SHIFT 21
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
+#elif (PAGE_SHIFT <= 29)			/** 8M ... 512M >> 4M */
+#define TF_EM_PAGE_SHIFT 22
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
+#else						/** >= 1G >> 1G */
+#define TF_EM_PAGE_SHIFT	30
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#endif
+
+#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
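+
+/* Example: with the default PAGE_SHIFT of 22 above, the chain resolves
+ * to TF_EM_PAGE_SHIFT 21, i.e. 2M EM pages (TF_EM_PAGE_SIZE = 0x200000).
+ */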
+
 /**
  * Initialize table pool structure to indicate
  * no table scope has been associated with the
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 13/33] net/bnxt: fetch SVIF information from the firmware
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (11 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 12/33] net/bnxt: add EM/EEM functionality Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 14/33] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
                   ` (20 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

SVIF (source virtual interface) is used to represent a physical port,
physical function, or a virtual function. SVIF is compared during L2
context and exact match lookups in the TX direction, and is masked for
port information during L2 context and exact match lookups in the RX
direction. Hence, the driver needs this SVIF information to program the
L2 context and exact match tables.
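
As an illustration (hypothetical caller, not part of this patch), other
driver code can query the SVIFs cached by this patch via the exported
helper:

    uint16_t func_svif = bnxt_get_svif(port_id, true);   /* function SVIF */
    uint16_t port_svif = bnxt_get_svif(port_id, false);  /* physical port SVIF */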

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  7 +++++++
 drivers/net/bnxt/bnxt_ethdev.c | 14 ++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.c   | 34 ++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 56 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 0142acb..dd3cde7 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -684,6 +684,10 @@ struct bnxt {
 /* TCAM and EM should be 16-bit only. Other modes not supported. */
 #define BNXT_FLOW_ID_MASK	0x0000ffff
 	struct bnxt_mark_info	*mark_table;
+
+#define	BNXT_SVIF_INVALID	0xFFFF
+	uint16_t		func_svif;
+	uint16_t		port_svif;
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW
 	struct tf               tfp;
 #endif
@@ -727,4 +731,7 @@ extern int bnxt_logtype_driver;
 
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
+
 #endif
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 93d0062..f3cc745 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4696,6 +4696,18 @@ static void bnxt_config_vf_req_fwd(struct bnxt *bp)
 	ALLOW_FUNC(HWRM_VNIC_TPA_CFG);
 }
 
+uint16_t
+bnxt_get_svif(uint16_t port_id, bool func_svif)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	bp = eth_dev->data->dev_private;
+
+	return func_svif ? bp->func_svif : bp->port_svif;
+}
+
 static int bnxt_init_fw(struct bnxt *bp)
 {
 	uint16_t mtu;
@@ -4731,6 +4743,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 	if (rc)
 		return rc;
 
+	bnxt_hwrm_port_mac_qcfg(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 5f0c13e..dd47908 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3014,6 +3014,8 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	uint16_t svif_info;
 	uint16_t flags;
 	int rc = 0;
+	bp->func_svif = BNXT_SVIF_INVALID;
 
 	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
@@ -3024,6 +3026,12 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 
 	/* Hard Coded.. 0xfff VLAN ID mask */
 	bp->vlan = rte_le_to_cpu_16(resp->vlan) & 0xfff;
+
+	svif_info = rte_le_to_cpu_16(resp->svif_info);
+	if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
+		bp->func_svif = svif_info &
+				HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK;
+
 	flags = rte_le_to_cpu_16(resp->flags);
 	if (BNXT_PF(bp) && (flags & HWRM_FUNC_QCFG_OUTPUT_FLAGS_MULTI_HOST))
 		bp->flags |= BNXT_FLAG_MULTI_HOST;
@@ -3060,6 +3068,32 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
+{
+	struct hwrm_port_mac_qcfg_input req = {0};
+	struct hwrm_port_mac_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	uint16_t port_svif_info;
+	int rc;
+
+	bp->port_svif = BNXT_SVIF_INVALID;
+
+	HWRM_PREP(&req, HWRM_PORT_MAC_QCFG, BNXT_USE_CHIMP_MB);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	port_svif_info = rte_le_to_cpu_16(resp->port_svif_info);
+	if (port_svif_info &
+	    HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_VALID)
+		bp->port_svif = port_svif_info &
+			HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_MASK;
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 static void copy_func_cfg_to_qcaps(struct hwrm_func_cfg_input *fcfg,
 				   struct hwrm_func_qcaps_output *qcaps)
 {
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index df7aa74..0079d8a 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -193,6 +193,7 @@ int bnxt_hwrm_port_qstats(struct bnxt *bp);
 int bnxt_hwrm_port_clr_stats(struct bnxt *bp);
 int bnxt_hwrm_port_led_cfg(struct bnxt *bp, bool led_on);
 int bnxt_hwrm_port_led_qcaps(struct bnxt *bp);
+int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp);
 int bnxt_hwrm_func_cfg_vf_set_flags(struct bnxt *bp, uint16_t vf,
 					uint32_t flags);
 void vf_vnic_set_rxmask_cb(struct bnxt_vnic_info *vnic, void *flagp);
-- 
2.7.4



* [dpdk-dev] [PATCH 14/33] net/bnxt: fetch vnic info from DPDK port
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (12 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 13/33] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 15/33] net/bnxt: add support for ULP session manager init Venkat Duvvuru
                   ` (19 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

The driver needs the VNIC to program the action record for RX flows;
the VNIC determines which receive rings are used to place the received
packets. This patch introduces a routine that maps a given DPDK port ID
to its firmware VNIC ID.
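
A minimal usage sketch (hypothetical caller; the flow mapper patches
later in this series are the real consumer):

	/* Resolve the default VNIC of the port; the action record of an
	 * RX flow steers matched packets to this VNIC's receive rings.
	 */
	uint16_t vnic_id = bnxt_get_vnic_id(port_id);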

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  1 +
 drivers/net/bnxt/bnxt_ethdev.c | 15 +++++++++++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index dd3cde7..fbf6340 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -732,6 +732,7 @@ extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
 
+uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
 
 #endif
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f3cc745..57ed90f 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4708,6 +4708,21 @@ bnxt_get_svif(uint16_t port_id, bool func_svif)
 	return func_svif ? bp->func_svif : bp->port_svif;
 }
 
+uint16_t
+bnxt_get_vnic_id(uint16_t port)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt_vnic_info *vnic;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port];
+	bp = eth_dev->data->dev_private;
+
+	vnic = BNXT_GET_DEFAULT_VNIC(bp);
+
+	return vnic->fw_vnic_id;
+}
+
 static int bnxt_init_fw(struct bnxt *bp)
 {
 	uint16_t mtu;
-- 
2.7.4



* [dpdk-dev] [PATCH 15/33] net/bnxt: add support for ULP session manager init
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (13 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 14/33] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 16/33] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
                   ` (18 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Mike Baucom

A ULP session contains all the resources needed to support rte_flow
offloads. A session is initialized as part of rte_eth_device start.
A DPDK application can have multiple interfaces, which means
rte_eth_device start will be called for each of these devices, so the
ULP session manager makes sure that a single ULP session is initialized
only once. Apart from this, it also initializes the MARK database, the
EEM table scope and the flow database. The ULP session manager also
maintains a list of all opened ULP sessions.
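
A sketch of the intended behaviour for two uplink ports on the same
PCI domain/bus (hypothetical bp0/bp1; bp0 is started first):

	bnxt_ulp_init(bp0);	/* no session yet for this domain/bus:
				 * allocate the session state, open the
				 * TF session, create the mark and flow
				 * databases and the EEM table scope */
	bnxt_ulp_init(bp1);	/* session already initialized: only
				 * ulp_ctx_attach() runs, sharing
				 * cfg_data and bumping ref_cnt */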

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   6 +-
 drivers/net/bnxt/bnxt.h                       |   8 +-
 drivers/net/bnxt/bnxt_ethdev.c                |   5 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |  35 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            | 527 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            | 100 +++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 187 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |  77 ++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |  94 +++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h        |  49 +++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     |  28 ++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |  35 ++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  40 ++
 13 files changed, 1189 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.c
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_struct.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index c950c6d..87c61bf 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -44,7 +44,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
 ifeq ($(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW), y)
-CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core -I$(SRCDIR)/tf_ulp
 endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_core.c
@@ -57,6 +57,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tfp.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/bnxt_ulp.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_mark_mgr.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_flow_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_template_db.c
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index fbf6340..3cb8ba3 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -23,6 +23,7 @@
 
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW
 #include "tf_core.h"
+#include "bnxt_ulp.h"
 #endif
 
 /* Vendor ID */
@@ -689,7 +690,8 @@ struct bnxt {
 	uint16_t		func_svif;
 	uint16_t		port_svif;
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW
-	struct tf               tfp;
+	struct tf		tfp;
+	struct bnxt_ulp_context	ulp_ctx;
 #endif
 };
 
@@ -731,6 +733,10 @@ extern int bnxt_logtype_driver;
 
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW
+int32_t bnxt_ulp_init(struct bnxt *bp);
+void bnxt_ulp_deinit(struct bnxt *bp);
+#endif
 
 uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 57ed90f..3d19894 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -891,6 +891,11 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 	pthread_mutex_lock(&bp->def_cp_lock);
 	bnxt_schedule_fw_health_check(bp);
 	pthread_mutex_unlock(&bp->def_cp_lock);
+
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW
+	bnxt_ulp_init(bp);
+#endif
+
 	return 0;
 
 error:
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
new file mode 100644
index 0000000..3516df4
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_TF_COMMON_H_
+#define _BNXT_TF_COMMON_H_
+
+#define BNXT_TF_DBG(lvl, fmt, args...)	PMD_DRV_LOG(lvl, fmt, ## args)
+
+#define BNXT_ULP_EM_FLOWS			8192
+#define BNXT_ULP_1M_FLOWS			1000000
+#define BNXT_EEM_RX_GLOBAL_ID_MASK		(BNXT_ULP_1M_FLOWS - 1)
+#define BNXT_EEM_TX_GLOBAL_ID_MASK		(BNXT_ULP_1M_FLOWS - 1)
+#define BNXT_EEM_HASH_KEY2_USED			0x8000000
+#define BNXT_EEM_RX_HW_HASH_KEY2_BIT		BNXT_ULP_1M_FLOWS
+#define	BNXT_ULP_DFLT_RX_MAX_KEY		512
+#define	BNXT_ULP_DFLT_RX_MAX_ACTN_ENTRY		256
+#define	BNXT_ULP_DFLT_RX_MEM			0
+#define	BNXT_ULP_RX_NUM_FLOWS			32
+#define	BNXT_ULP_RX_TBL_IF_ID			0
+#define	BNXT_ULP_DFLT_TX_MAX_KEY		512
+#define	BNXT_ULP_DFLT_TX_MAX_ACTN_ENTRY		256
+#define	BNXT_ULP_DFLT_TX_MEM			0
+#define	BNXT_ULP_TX_NUM_FLOWS			32
+#define	BNXT_ULP_TX_TBL_IF_ID			0
+
+struct bnxt_ulp_mark_tbl *
+bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
+
+int32_t
+bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_mark_tbl *mark_tbl);
+
+#endif /* _BNXT_TF_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
new file mode 100644
index 0000000..7afc6bf
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -0,0 +1,527 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_tailq.h>
+
+#include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
+#include "bnxt.h"
+#include "tf_core.h"
+#include "tf_ext_flow_handle.h"
+
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "ulp_mark_mgr.h"
+#include "ulp_flow_db.h"
+
+/* Linked list of all TF sessions. */
+STAILQ_HEAD(, bnxt_ulp_session_state) bnxt_ulp_session_list =
+			STAILQ_HEAD_INITIALIZER(bnxt_ulp_session_list);
+
+/* Mutex to synchronize bnxt_ulp_session_list operations. */
+static pthread_mutex_t bnxt_ulp_global_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+/*
+ * Initialize a ULP session.
+ *
+ * Opens a new TF session for the given device and records it in the
+ * session state. A single vswitch instance can have multiple uplinks,
+ * which means rte_eth_device start runs for each of them; the session
+ * manager guarantees that only the first port on a PCI domain/bus
+ * actually opens the TF session, and later ports simply attach to it.
+ *
+ * Returns 0 on success, negative errno otherwise.
+ */
+static int32_t
+ulp_ctx_session_open(struct bnxt *bp,
+		     struct bnxt_ulp_session_state *session)
+{
+	struct rte_eth_dev		*ethdev = bp->eth_dev;
+	int32_t				rc = 0;
+	struct tf_open_session_parms	params;
+
+	memset(&params, 0, sizeof(params));
+
+	rc = rte_eth_dev_get_name_by_port(ethdev->data->port_id,
+					  params.ctrl_chan_name);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Invalid port %d, rc = %d\n",
+			    ethdev->data->port_id, rc);
+		return rc;
+	}
+
+	rc = tf_open_session(&bp->tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to open TF session - %s, rc = %d\n",
+			    params.ctrl_chan_name, rc);
+		return -EINVAL;
+	}
+	session->session_opened = 1;
+	session->g_tfp = &bp->tfp;
+	return rc;
+}
+
+static void
+bnxt_init_tbl_scope_parms(struct bnxt *bp,
+			  struct tf_alloc_tbl_scope_parms *params)
+{
+	struct bnxt_ulp_device_params	*dparms;
+	uint32_t dev_id;
+	int rc;
+
+	rc = bnxt_ulp_cntxt_dev_id_get(&bp->ulp_ctx, &dev_id);
+	if (rc)
+		/* TBD: For now, just use default. */
+		dparms = NULL;
+	else
+		dparms = bnxt_ulp_device_params_get(dev_id);
+
+	if (!dparms) {
+		params->rx_max_key_sz_in_bits = BNXT_ULP_DFLT_RX_MAX_KEY;
+		params->rx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_RX_MAX_ACTN_ENTRY;
+		params->rx_mem_size_in_mb = BNXT_ULP_DFLT_RX_MEM;
+		params->rx_num_flows_in_k = BNXT_ULP_RX_NUM_FLOWS;
+		params->rx_tbl_if_id = BNXT_ULP_RX_TBL_IF_ID;
+
+		params->tx_max_key_sz_in_bits = BNXT_ULP_DFLT_TX_MAX_KEY;
+		params->tx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_TX_MAX_ACTN_ENTRY;
+		params->tx_mem_size_in_mb = BNXT_ULP_DFLT_TX_MEM;
+		params->tx_num_flows_in_k = BNXT_ULP_TX_NUM_FLOWS;
+		params->tx_tbl_if_id = BNXT_ULP_TX_TBL_IF_ID;
+	} else {
+		params->rx_max_key_sz_in_bits = BNXT_ULP_DFLT_RX_MAX_KEY;
+		params->rx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_RX_MAX_ACTN_ENTRY;
+		params->rx_mem_size_in_mb = BNXT_ULP_DFLT_RX_MEM;
+		params->rx_num_flows_in_k = dparms->num_flows / (1024);
+		params->rx_tbl_if_id = BNXT_ULP_RX_TBL_IF_ID;
+
+		params->tx_max_key_sz_in_bits = BNXT_ULP_DFLT_TX_MAX_KEY;
+		params->tx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_TX_MAX_ACTN_ENTRY;
+		params->tx_mem_size_in_mb = BNXT_ULP_DFLT_TX_MEM;
+		params->tx_num_flows_in_k = dparms->num_flows / (1024);
+		params->tx_tbl_if_id = BNXT_ULP_TX_TBL_IF_ID;
+	}
+}
+
+/* Initialize Extended Exact Match host memory. */
+static int32_t
+ulp_eem_tbl_scope_init(struct bnxt *bp)
+{
+	struct tf_alloc_tbl_scope_parms params = {0};
+	int rc;
+
+	bnxt_init_tbl_scope_parms(bp, &params);
+
+	rc = tf_alloc_tbl_scope(&bp->tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to allocate eem table scope rc = %d\n",
+			    rc);
+		return rc;
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_set(&bp->ulp_ctx, params.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to set table scope id\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+/* The function to free and deinit the ulp context data. */
+static int32_t
+ulp_ctx_deinit(struct bnxt *bp,
+	       struct bnxt_ulp_session_state *session)
+{
+	if (!session || !bp) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Free the contents */
+	if (session->cfg_data) {
+		rte_free(session->cfg_data);
+		bp->ulp_ctx.cfg_data = NULL;
+		session->cfg_data = NULL;
+	}
+	return 0;
+}
+
+/* The function to allocate and initialize the ulp context data. */
+static int32_t
+ulp_ctx_init(struct bnxt *bp,
+	     struct bnxt_ulp_session_state *session)
+{
+	struct bnxt_ulp_data	*ulp_data;
+	int32_t			rc = 0;
+
+	if (!session || !bp) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Allocate memory to hold ulp context data. */
+	ulp_data = rte_zmalloc("bnxt_ulp_data",
+			       sizeof(struct bnxt_ulp_data), 0);
+	if (!ulp_data) {
+		BNXT_TF_DBG(ERR, "Failed to allocate memory for ulp data\n");
+		return -ENOMEM;
+	}
+
+	/* Increment the ulp context data reference count usage. */
+	bp->ulp_ctx.cfg_data = ulp_data;
+	session->cfg_data = ulp_data;
+	ulp_data->ref_cnt++;
+
+	/* Open the ulp session. */
+	rc = ulp_ctx_session_open(bp, session);
+	if (rc) {
+		(void)ulp_ctx_deinit(bp, session);
+		return rc;
+	}
+	bnxt_ulp_cntxt_tfp_set(&bp->ulp_ctx, session->g_tfp);
+	return rc;
+}
+
+static int32_t
+ulp_ctx_attach(struct bnxt_ulp_context *ulp_ctx,
+	       struct bnxt_ulp_session_state *session)
+{
+	if (!ulp_ctx || !session) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Increment the ulp context data reference count usage. */
+	ulp_ctx->cfg_data = session->cfg_data;
+	ulp_ctx->cfg_data->ref_cnt++;
+
+	/* TBD call TF_session_attach. */
+	ulp_ctx->g_tfp = session->g_tfp;
+	return 0;
+}
+
+/*
+ * Initialize the state of a ULP session.
+ * If the state of a ULP session is not initialized, set its state to
+ * initialized. If the state is already initialized, do nothing.
+ */
+static void
+ulp_context_initialized(struct bnxt_ulp_session_state *session, bool *init)
+{
+	pthread_mutex_lock(&session->bnxt_ulp_mutex);
+
+	if (!session->bnxt_ulp_init) {
+		session->bnxt_ulp_init = true;
+		*init = false;
+	} else {
+		*init = true;
+	}
+
+	pthread_mutex_unlock(&session->bnxt_ulp_mutex);
+}
+
+/*
+ * Check if a ULP session is already allocated for a specific PCI
+ * domain & bus. If it is already allocated, simply return the session
+ * pointer; otherwise return NULL.
+ */
+static struct bnxt_ulp_session_state *
+ulp_get_session(struct rte_pci_addr *pci_addr)
+{
+	struct bnxt_ulp_session_state *session;
+
+	STAILQ_FOREACH(session, &bnxt_ulp_session_list, next) {
+		if (session->pci_info.domain == pci_addr->domain &&
+		    session->pci_info.bus == pci_addr->bus) {
+			return session;
+		}
+	}
+	return NULL;
+}
+
+/*
+ * Allocate and initialize a ULP session and set its state to initialized.
+ * If it is already initialized, simply return the existing session.
+ */
+static struct bnxt_ulp_session_state *
+ulp_session_init(struct bnxt *bp,
+		 bool *init)
+{
+	struct rte_pci_device		*pci_dev;
+	struct rte_pci_addr		*pci_addr;
+	struct bnxt_ulp_session_state	*session;
+
+	if (!bp)
+		return NULL;
+
+	pci_dev = RTE_DEV_TO_PCI(bp->eth_dev->device);
+	pci_addr = &pci_dev->addr;
+
+	pthread_mutex_lock(&bnxt_ulp_global_mutex);
+
+	session = ulp_get_session(pci_addr);
+	if (!session) {
+		/* Session not found; allocate a new one. */
+		session = rte_zmalloc("bnxt_ulp_session",
+				      sizeof(struct bnxt_ulp_session_state),
+				      0);
+		if (!session) {
+			BNXT_TF_DBG(ERR,
+				    "Allocation failed for bnxt_ulp_session\n");
+			pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+			return NULL;
+
+		} else {
+			/* Add it to the queue */
+			session->pci_info.domain = pci_addr->domain;
+			session->pci_info.bus = pci_addr->bus;
+			pthread_mutex_init(&session->bnxt_ulp_mutex, NULL);
+			STAILQ_INSERT_TAIL(&bnxt_ulp_session_list,
+					   session, next);
+		}
+	}
+	ulp_context_initialized(session, init);
+	pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+	return session;
+}
+
+/*
+ * When a port is initialized by DPDK, this function is called.
+ * It initializes the ULP context and the rest of the
+ * infrastructure associated with it.
+ */
+int32_t
+bnxt_ulp_init(struct bnxt *bp)
+{
+	struct bnxt_ulp_session_state *session;
+	bool init;
+	int rc;
+
+	/*
+	 * Multiple uplink ports can be associated with a single vswitch.
+	 * Make sure only the port that is started first will initialize
+	 * the TF session.
+	 */
+	session = ulp_session_init(bp, &init);
+	if (!session) {
+		BNXT_TF_DBG(ERR, "Failed to initialize the tf session\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * If ULP is already initialized for a specific domain then simply
+	 * assign the ulp context to this rte_eth_dev.
+	 */
+	if (init) {
+		rc = ulp_ctx_attach(&bp->ulp_ctx, session);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "Failed to attach the ulp context\n");
+		}
+		return rc;
+	}
+
+	/* Allocate and Initialize the ulp context. */
+	rc = ulp_ctx_init(bp, session);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the ulp context\n");
+		goto jump_to_error;
+	}
+
+	/* Create the Mark database. */
+	rc = ulp_mark_db_init(&bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the mark database\n");
+		goto jump_to_error;
+	}
+
+	/* Create the flow database. */
+	rc = ulp_flow_db_init(&bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the flow database\n");
+		goto jump_to_error;
+	}
+
+	/* Create the eem table scope. */
+	rc = ulp_eem_tbl_scope_init(bp);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the eem scope table\n");
+		goto jump_to_error;
+	}
+
+	return rc;
+
+jump_to_error:
+	return -ENOMEM;
+}
+
+/* Below are the access functions to access internal data of ulp context. */
+
+/* Function to set the Mark DB into the context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_mark_tbl *mark_tbl)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->mark_tbl = mark_tbl;
+
+	return 0;
+}
+
+/* Function to retrieve the Mark DB from the context. */
+struct bnxt_ulp_mark_tbl *
+bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return NULL;
+
+	return ulp_ctx->cfg_data->mark_tbl;
+}
+
+/* Function to set the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t dev_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		ulp_ctx->cfg_data->dev_id = dev_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to get the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_get(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t *dev_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		*dev_id = ulp_ctx->cfg_data->dev_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to get the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_get(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t *tbl_scope_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		*tbl_scope_id = ulp_ctx->cfg_data->tbl_scope_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to set the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_set(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t tbl_scope_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		ulp_ctx->cfg_data->tbl_scope_id = tbl_scope_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to set the tfp session details from the ulp context. */
+int32_t
+bnxt_ulp_cntxt_tfp_set(struct bnxt_ulp_context *ulp, struct tf *tfp)
+{
+	if (!ulp) {
+		BNXT_TF_DBG(ERR, "Invalid arguments\n");
+		return -EINVAL;
+	}
+
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	ulp->g_tfp = tfp;
+	return 0;
+}
+
+/* Function to get the tfp session details from the ulp context. */
+struct tf *
+bnxt_ulp_cntxt_tfp_get(struct bnxt_ulp_context *ulp)
+{
+	if (!ulp) {
+		BNXT_TF_DBG(ERR, "Invalid arguments\n");
+		return NULL;
+	}
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	return ulp->g_tfp;
+}
+
+/*
+ * Get the device table entry based on the device id.
+ *
+ * dev_id [in] The device id of the hardware
+ *
+ * Returns the pointer to the device parameters.
+ */
+struct bnxt_ulp_device_params *
+bnxt_ulp_device_params_get(uint32_t dev_id)
+{
+	if (dev_id < BNXT_ULP_MAX_NUM_DEVICES)
+		return &ulp_device_params[dev_id];
+	return NULL;
+}
+
+/* Function to set the flow database to the ulp context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_flow_db_set(struct bnxt_ulp_context	*ulp_ctx,
+				struct bnxt_ulp_flow_db	*flow_db)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->flow_db = flow_db;
+	return 0;
+}
+
+/* Function to get the flow database from the ulp context. */
+struct bnxt_ulp_flow_db	*
+bnxt_ulp_cntxt_ptr2_flow_db_get(struct bnxt_ulp_context	*ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return NULL;
+	}
+
+	return ulp_ctx->cfg_data->flow_db;
+}
+
+/* Function to get the ulp context from eth device. */
+struct bnxt_ulp_context	*
+bnxt_ulp_eth_dev_ptr2_cntxt_get(struct rte_eth_dev	*dev)
+{
+	struct bnxt	*bp;
+
+	bp = (struct bnxt *)dev->data->dev_private;
+	if (!bp) {
+		BNXT_TF_DBG(ERR, "Bnxt private data is not initialized\n");
+		return NULL;
+	}
+	return &bp->ulp_ctx;
+}
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
new file mode 100644
index 0000000..d88225f
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_ULP_H_
+#define _BNXT_ULP_H_
+
+#include <inttypes.h>
+#include <stdbool.h>
+#include <sys/queue.h>
+
+#include "rte_ethdev.h"
+
+struct bnxt_ulp_data {
+	uint32_t			tbl_scope_id;
+	struct bnxt_ulp_mark_tbl	*mark_tbl;
+	uint32_t			dev_id; /* Hardware device id */
+	uint32_t			ref_cnt;
+	struct bnxt_ulp_flow_db		*flow_db;
+};
+
+struct bnxt_ulp_context {
+	struct bnxt_ulp_data	*cfg_data;
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	struct tf		*g_tfp;
+};
+
+struct bnxt_ulp_pci_info {
+	uint32_t	domain;
+	uint8_t		bus;
+};
+
+struct bnxt_ulp_session_state {
+	STAILQ_ENTRY(bnxt_ulp_session_state)	next;
+	bool					bnxt_ulp_init;
+	pthread_mutex_t				bnxt_ulp_mutex;
+	struct bnxt_ulp_pci_info		pci_info;
+	struct bnxt_ulp_data			*cfg_data;
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	struct tf				*g_tfp;
+	uint32_t				session_opened;
+};
+
+/* ULP flow id structure */
+struct rte_tf_flow {
+	uint32_t	flow_id;
+};
+
+/* Function to set the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx, uint32_t dev_id);
+
+/* Function to get the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_get(struct bnxt_ulp_context *ulp_ctx, uint32_t *dev_id);
+
+/* Function to set the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_set(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t tbl_scope_id);
+
+/* Function to get the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_get(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t *tbl_scope_id);
+
+/* Function to set the tfp session details in the ulp context. */
+int32_t
+bnxt_ulp_cntxt_tfp_set(struct bnxt_ulp_context *ulp, struct tf *tfp);
+
+/* Function to get the tfp session details from ulp context. */
+struct tf *
+bnxt_ulp_cntxt_tfp_get(struct bnxt_ulp_context *ulp);
+
+/* Get the device table entry based on the device id. */
+struct bnxt_ulp_device_params *
+bnxt_ulp_device_params_get(uint32_t dev_id);
+
+int32_t
+bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_mark_tbl *mark_tbl);
+
+struct bnxt_ulp_mark_tbl *
+bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
+
+/* Function to set the flow database to the ulp context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_flow_db_set(struct bnxt_ulp_context	*ulp_ctx,
+				struct bnxt_ulp_flow_db	*flow_db);
+
+/* Function to get the flow database from the ulp context. */
+struct bnxt_ulp_flow_db	*
+bnxt_ulp_cntxt_ptr2_flow_db_get(struct bnxt_ulp_context	*ulp_ctx);
+
+/* Function to get the ulp context from eth device. */
+struct bnxt_ulp_context	*
+bnxt_ulp_eth_dev_ptr2_cntxt_get(struct rte_eth_dev *dev);
+
+#endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
new file mode 100644
index 0000000..3dd39c1
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_malloc.h>
+#include "bnxt.h"
+#include "bnxt_tf_common.h"
+#include "ulp_flow_db.h"
+#include "ulp_template_struct.h"
+
+/*
+ * Helper function to allocate the flow table and initialize
+ * the stack for allocation operations.
+ *
+ * flow_db [in] Ptr to flow database structure
+ * tbl_idx [in] The index of the table to create.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+static int32_t
+ulp_flow_db_alloc_resource(struct bnxt_ulp_flow_db *flow_db,
+			   enum bnxt_ulp_flow_db_tables tbl_idx)
+{
+	uint32_t			idx = 0;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+	uint32_t			size;
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	size = sizeof(struct ulp_fdb_resource_info) * flow_tbl->num_resources;
+	flow_tbl->flow_resources =
+			rte_zmalloc("ulp_fdb_resource_info", size, 0);
+
+	if (!flow_tbl->flow_resources) {
+		BNXT_TF_DBG(ERR, "Failed to alloc memory for flow table\n");
+		return -ENOMEM;
+	}
+	size = sizeof(uint32_t) * flow_tbl->num_resources;
+	flow_tbl->flow_tbl_stack = rte_zmalloc("flow_tbl_stack", size, 0);
+	if (!flow_tbl->flow_tbl_stack) {
+		BNXT_TF_DBG(ERR, "Failed to alloc memory flow tbl stack\n");
+		return -ENOMEM;
+	}
+	size = (flow_tbl->num_flows / sizeof(uint64_t)) + 1;
+	flow_tbl->active_flow_tbl = rte_zmalloc("active flow tbl", size, 0);
+	if (!flow_tbl->active_flow_tbl) {
+		BNXT_TF_DBG(ERR, "Failed to alloc memory active tbl\n");
+		return -ENOMEM;
+	}
+
+	/* Initialize the stack table. */
+	for (idx = 0; idx < flow_tbl->num_resources; idx++)
+		flow_tbl->flow_tbl_stack[idx] = idx;
+
+	/* Ignore the first element in the list. */
+	flow_tbl->head_index = 1;
+	/* Tail points to the last entry in the list. */
+	flow_tbl->tail_index = flow_tbl->num_resources - 1;
+	return 0;
+}
+
+/*
+ * Helper function to deallocate the flow table.
+ *
+ * flow_db [in] Ptr to flow database structure
+ * tbl_idx [in] The index of the table to free.
+ *
+ * Returns none.
+ */
+static void
+ulp_flow_db_dealloc_resource(struct bnxt_ulp_flow_db *flow_db,
+			     enum bnxt_ulp_flow_db_tables tbl_idx)
+{
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* Free all the allocated tables in the flow table. */
+	if (flow_tbl->active_flow_tbl) {
+		rte_free(flow_tbl->active_flow_tbl);
+		flow_tbl->active_flow_tbl = NULL;
+	}
+
+	if (flow_tbl->flow_tbl_stack) {
+		rte_free(flow_tbl->flow_tbl_stack);
+		flow_tbl->flow_tbl_stack = NULL;
+	}
+
+	if (flow_tbl->flow_resources) {
+		rte_free(flow_tbl->flow_resources);
+		flow_tbl->flow_resources = NULL;
+	}
+}
+
+/*
+ * Initialize the flow database. Memory is allocated in this
+ * call and assigned to the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt)
+{
+	struct bnxt_ulp_device_params		*dparms;
+	struct bnxt_ulp_flow_tbl		*flow_tbl;
+	struct bnxt_ulp_flow_db			*flow_db;
+	uint32_t				dev_id;
+
+	/* Get the dev specific number of flows that needed to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctxt, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	flow_db = rte_zmalloc("bnxt_ulp_flow_db",
+			      sizeof(struct bnxt_ulp_flow_db), 0);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR,
+			    "Failed to allocate memory for flow table ptr\n");
+		goto error_free;
+	}
+
+	/* Attach the flow database to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_flow_db_set(ulp_ctxt, flow_db);
+
+	/* Populate the regular flow table limits. */
+	flow_tbl = &flow_db->flow_tbl[BNXT_ULP_REGULAR_FLOW_TABLE];
+	flow_tbl->num_flows = dparms->num_flows + 1;
+	flow_tbl->num_resources = (flow_tbl->num_flows *
+				   dparms->num_resources_per_flow);
+
+	/* Populate the default flow table limits. */
+	flow_tbl = &flow_db->flow_tbl[BNXT_ULP_DEFAULT_FLOW_TABLE];
+	flow_tbl->num_flows = BNXT_FLOW_DB_DEFAULT_NUM_FLOWS + 1;
+	flow_tbl->num_resources = (flow_tbl->num_flows *
+				   BNXT_FLOW_DB_DEFAULT_NUM_RESOURCES);
+
+	/* Allocate the resource for the regular flow table. */
+	if (ulp_flow_db_alloc_resource(flow_db, BNXT_ULP_REGULAR_FLOW_TABLE))
+		goto error_free;
+	if (ulp_flow_db_alloc_resource(flow_db, BNXT_ULP_DEFAULT_FLOW_TABLE))
+		goto error_free;
+
+	/* All good so return. */
+	return 0;
+error_free:
+	ulp_flow_db_deinit(ulp_ctxt);
+	return -ENOMEM;
+}
+
+/*
+ * Deinitialize the flow database. Memory is deallocated in
+ * this call and all flows should have been purged before this
+ * call.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success.
+ */
+int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
+{
+	struct bnxt_ulp_flow_db			*flow_db;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Detach the flow database from the ulp context. */
+	bnxt_ulp_cntxt_ptr2_flow_db_set(ulp_ctxt, NULL);
+
+	/* Free up all the memory. */
+	ulp_flow_db_dealloc_resource(flow_db, BNXT_ULP_REGULAR_FLOW_TABLE);
+	ulp_flow_db_dealloc_resource(flow_db, BNXT_ULP_DEFAULT_FLOW_TABLE);
+	rte_free(flow_db);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
new file mode 100644
index 0000000..a2ee8fa
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_FLOW_DB_H_
+#define _ULP_FLOW_DB_H_
+
+#include "bnxt_ulp.h"
+#include "ulp_template_db.h"
+
+#define BNXT_FLOW_DB_DEFAULT_NUM_FLOWS		128
+#define BNXT_FLOW_DB_DEFAULT_NUM_RESOURCES	5
+
+/* Structure for the flow database resource information. */
+struct ulp_fdb_resource_info {
+	/* Points to next resource in the chained list. */
+	uint32_t	nxt_resource_idx;
+	union {
+		uint64_t	resource_em_handle;
+		struct {
+			uint32_t	resource_type;
+			uint32_t	resource_hndl;
+		};
+	};
+};
+
+/* Structure for the flow database resource information. */
+struct bnxt_ulp_flow_tbl {
+	/* Flow tbl is the resource object list for each flow id. */
+	struct ulp_fdb_resource_info	*flow_resources;
+
+	/* Flow table stack to track free list of resources. */
+	uint32_t	*flow_tbl_stack;
+	uint32_t	head_index;
+	uint32_t	tail_index;
+
+	/* Table to track the active flows. */
+	uint64_t	*active_flow_tbl;
+	uint32_t	num_flows;
+	uint32_t	num_resources;
+};
+
+/* Flow database supports two tables. */
+enum bnxt_ulp_flow_db_tables {
+	BNXT_ULP_REGULAR_FLOW_TABLE,
+	BNXT_ULP_DEFAULT_FLOW_TABLE,
+	BNXT_ULP_FLOW_TABLE_MAX
+};
+
+/* Structure for the flow database resource information. */
+struct bnxt_ulp_flow_db {
+	struct bnxt_ulp_flow_tbl	flow_tbl[BNXT_ULP_FLOW_TABLE_MAX];
+};
+
+/*
+ * Initialize the flow database. Memory is allocated in this
+ * call and assigned to the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt);
+
+/*
+ * Deinitialize the flow database. Memory is deallocated in
+ * this call and all flows should have been purged before this
+ * call.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success.
+ */
+int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
+
+#endif /* _ULP_FLOW_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
new file mode 100644
index 0000000..3f28a73
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include "bnxt_ulp.h"
+#include "tf_ext_flow_handle.h"
+#include "ulp_mark_mgr.h"
+#include "bnxt_tf_common.h"
+#include "../bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+
+/*
+ * Allocate and Initialize all Mark Manager resources for this ulp context.
+ *
+ * ctxt [in] The ulp context for the mark manager.
+ *
+ */
+int32_t
+ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_device_params *dparms;
+	struct bnxt_ulp_mark_tbl *mark_tbl = NULL;
+	uint32_t dev_id;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(DEBUG, "Invalid ULP CTXT\n");
+		return -EINVAL;
+	}
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to device parms\n");
+		return -EINVAL;
+	}
+
+	mark_tbl = rte_zmalloc("ulp_rx_mark_tbl_ptr",
+			       sizeof(struct bnxt_ulp_mark_tbl), 0);
+	if (!mark_tbl)
+		return -ENOMEM;
+
+	/* Allocate the LFID table based on the device's LFID entry count. */
+	mark_tbl->lfid_tbl = rte_zmalloc("ulp_rx_em_flow_mark_table",
+					 dparms->lfid_entries *
+					    sizeof(struct bnxt_lfid_mark_info),
+					 0);
+
+	if (!mark_tbl->lfid_tbl)
+		goto mem_error;
+
+	/* Need to allocate 2 * Num flows to account for hash type bit. */
+	mark_tbl->gfid_tbl = rte_zmalloc("ulp_rx_eem_flow_mark_table",
+					 2 * dparms->num_flows *
+					    sizeof(struct bnxt_gfid_mark_info),
+					 0);
+	if (!mark_tbl->gfid_tbl)
+		goto mem_error;
+
+	/*
+	 * TBD: This needs to be generalized for better mark handling
+	 * These values are used to compress the FID to the allowable index
+	 * space.  The FID from hw may be the full hash.
+	 */
+	mark_tbl->gfid_max	= dparms->gfid_entries - 1;
+	mark_tbl->gfid_mask	= (dparms->gfid_entries / 2) - 1;
+	mark_tbl->gfid_type_bit = (dparms->gfid_entries / 2);
+
+	BNXT_TF_DBG(DEBUG, "GFID Max = 0x%08x\nGFID MASK = 0x%08x\n",
+		    mark_tbl->gfid_max,
+		    mark_tbl->gfid_mask);
+
+	/* Add the mark tbl to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
+
+	return 0;
+
+mem_error:
+	rte_free(mark_tbl->gfid_tbl);
+	rte_free(mark_tbl->lfid_tbl);
+	rte_free(mark_tbl);
+	BNXT_TF_DBG(DEBUG,
+		    "Failed to allocate memory for mark mgr\n");
+
+	return -ENOMEM;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
new file mode 100644
index 0000000..b175abd
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_MARK_MGR_H_
+#define _ULP_MARK_MGR_H_
+
+#include "bnxt_ulp.h"
+
+#define ULP_MARK_INVALID (0)
+struct bnxt_lfid_mark_info {
+	uint16_t	mark_id;
+	bool		valid;
+};
+
+struct bnxt_gfid_mark_info {
+	uint32_t	mark_id;
+	bool		valid;
+};
+
+struct bnxt_ulp_mark_tbl {
+	struct bnxt_lfid_mark_info	*lfid_tbl;
+	struct bnxt_gfid_mark_info	*gfid_tbl;
+	uint32_t			gfid_mask;
+	uint32_t			gfid_type_bit;
+	uint32_t			gfid_max;
+};
+
+/*
+ * Allocate and Initialize all Mark Manager resources for this ulp context.
+ *
+ * Initialize MARK database for GFID & LFID tables
+ * GFID: Global flow id which is based on EEM hash id.
+ * LFID: Local flow id which is the CFA action pointer.
+ * GFID is used for EEM flows, LFID is used for EM flows.
+ *
+ * Flow mapper modules adds mark_id in the MARK database.
+ *
+ * BNXT PMD receive handler extracts the hardware flow id from the
+ * received completion record. Fetches mark_id from the MARK
+ * database using the flow id. Injects mark_id into the packet's mbuf.
+ *
+ * ctxt [in] The ulp context for the mark manager.
+ */
+int32_t
+ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
+
+#endif /* _ULP_MARK_MGR_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
new file mode 100644
index 0000000..bc0ffd3
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+/*
+ * date: Mon Mar  9 02:37:53 2020
+ * version: 0.0
+ */
+
+#include "ulp_template_db.h"
+#include "ulp_template_field_db.h"
+#include "ulp_template_struct.h"
+
+struct bnxt_ulp_device_params ulp_device_params[] = {
+	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
+		.global_fid_enable       = BNXT_ULP_SYM_YES,
+		.byte_order              = BNXT_ULP_SYM_LITTLE_ENDIAN,
+		.encap_byte_swap         = 1,
+		.lfid_entries            = 16384,
+		.lfid_entry_size         = 4,
+		.gfid_entries            = 65536,
+		.gfid_entry_size         = 4,
+		.num_flows               = 32768,
+		.num_resources_per_flow  = 8
+	}
+};
+
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
new file mode 100644
index 0000000..ba2a101
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+/*
+ * date: Mon Mar  9 02:37:53 2020
+ * version: 0.0
+ */
+
+#ifndef ULP_TEMPLATE_DB_H_
+#define ULP_TEMPLATE_DB_H_
+
+#define BNXT_ULP_MAX_NUM_DEVICES 4
+
+enum bnxt_ulp_byte_order {
+	BNXT_ULP_BYTE_ORDER_BE,
+	BNXT_ULP_BYTE_ORDER_LE,
+	BNXT_ULP_BYTE_ORDER_LAST
+};
+
+enum bnxt_ulp_device_id {
+	BNXT_ULP_DEVICE_ID_WH_PLUS,
+	BNXT_ULP_DEVICE_ID_THOR,
+	BNXT_ULP_DEVICE_ID_STINGRAY,
+	BNXT_ULP_DEVICE_ID_STINGRAY2,
+	BNXT_ULP_DEVICE_ID_LAST
+};
+
+enum bnxt_ulp_sym {
+	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
+	BNXT_ULP_SYM_YES = 1
+};
+
+#endif /* ULP_TEMPLATE_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
new file mode 100644
index 0000000..4b9d0b2
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_TEMPLATE_STRUCT_H_
+#define _ULP_TEMPLATE_STRUCT_H_
+
+#include <stdint.h>
+#include "rte_ether.h"
+#include "rte_icmp.h"
+#include "rte_ip.h"
+#include "rte_tcp.h"
+#include "rte_udp.h"
+#include "rte_esp.h"
+#include "rte_sctp.h"
+#include "rte_flow.h"
+#include "tf_core.h"
+
+/* Device specific parameters. */
+struct bnxt_ulp_device_params {
+	uint8_t				description[16];
+	uint32_t			global_fid_enable;
+	enum bnxt_ulp_byte_order	byte_order;
+	uint8_t				encap_byte_swap;
+	uint32_t			lfid_entries;
+	uint32_t			lfid_entry_size;
+	uint64_t			gfid_entries;
+	uint32_t			gfid_entry_size;
+	uint64_t			num_flows;
+	uint32_t			num_resources_per_flow;
+};
+
+/*
+ * The ulp_device_params is indexed by the dev_id.
+ * This table maintains the device specific parameters.
+ */
+extern struct bnxt_ulp_device_params ulp_device_params[];
+
+#endif /* _ULP_TEMPLATE_STRUCT_H_ */
-- 
2.7.4



* [dpdk-dev] [PATCH 16/33] net/bnxt: add support for ULP session manager cleanup
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (14 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 15/33] net/bnxt: add support for ULP session manager init Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 17/33] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
                   ` (17 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Mike Baucom

A ULP session contains all the resources needed to support rte_flow
offloads and is created when the first port of a device is started;
the ULP session manager tracks all opened sessions (see the previous
patch for details).

This patch adds support for cleaning up all the resources initialized
for a ULP session: the EEM table scope, the flow database, the MARK
database, the TF session and, finally, the session list entry itself.
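
For reference, a condensed sketch of the teardown order implemented in
bnxt_ulp_deinit() below (the reverse of the init order):

	ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx); /* EEM table scope   */
	ulp_flow_db_deinit(&bp->ulp_ctx);           /* flow database     */
	ulp_mark_db_deinit(&bp->ulp_ctx);           /* mark database     */
	ulp_ctx_detach(bp, session);                /* ctx + TF session  */
	ulp_session_deinit(session);                /* session list entry */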

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c         |   4 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c     | 167 ++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h     |  10 ++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  25 +++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |   8 ++
 5 files changed, 213 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 3d19894..2064f21 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -939,6 +939,10 @@ static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW
+	bnxt_ulp_deinit(bp);
+#endif
+
 	eth_dev->data->dev_started = 0;
 	/* Prevent crashes when queues are still in use */
 	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 7afc6bf..3795c6d 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -28,6 +28,27 @@ STAILQ_HEAD(, bnxt_ulp_session_state) bnxt_ulp_session_list =
 static pthread_mutex_t bnxt_ulp_global_mutex = PTHREAD_MUTEX_INITIALIZER;
 
 /*
+ * Allow the deletion of context only for the bnxt device that
+ * created the session
+ * TBD - The implementation of the function should change to
+ * using the reference count once tf_session_attach functionality
+ * is fixed.
+ */
+bool
+ulp_ctx_deinit_allowed(void *ptr)
+{
+	struct bnxt *bp = (struct bnxt *)ptr;
+
+	if (!bp)
+		return false;
+
+	if (&bp->tfp == bp->ulp_ctx.g_tfp)
+		return true;
+
+	return false;
+}
+
+/*
  * Initialize an ULP session.
  * An ULP session will contain all the resources needed to support rte flow
  * offloads. A session is initialized as part of rte_eth_device start.
@@ -67,6 +88,22 @@ ulp_ctx_session_open(struct bnxt *bp,
 	return rc;
 }
 
+/*
+ * Close the ULP session.
+ * It takes the ulp context pointer.
+ */
+static void
+ulp_ctx_session_close(struct bnxt *bp,
+		      struct bnxt_ulp_session_state *session)
+{
+	/* close the session in the hardware */
+	if (session->session_opened)
+		tf_close_session(&bp->tfp);
+	session->session_opened = 0;
+	session->g_tfp = NULL;
+	bp->ulp_ctx.g_tfp = NULL;
+}
+
 static void
 bnxt_init_tbl_scope_parms(struct bnxt *bp,
 			  struct tf_alloc_tbl_scope_parms *params)
@@ -138,6 +175,41 @@ ulp_eem_tbl_scope_init(struct bnxt *bp)
 	return 0;
 }
 
+/* Free Extended Exact Match host memory */
+static int32_t
+ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
+{
+	struct tf_free_tbl_scope_parms	params = {0};
+	struct tf			*tfp;
+	int32_t				rc = 0;
+
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return -EINVAL;
+
+	/* Free the resources only if this device created the session */
+	if (!ulp_ctx_deinit_allowed(bp))
+		return rc;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get the truflow pointer\n");
+		return -EINVAL;
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &params.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get the table scope id\n");
+		return -EINVAL;
+	}
+
+	rc = tf_free_tbl_scope(tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to free table scope\n");
+		return -EINVAL;
+	}
+	return rc;
+}
+
 /* The function to free and deinit the ulp context data. */
 static int32_t
 ulp_ctx_deinit(struct bnxt *bp,
@@ -148,6 +220,9 @@ ulp_ctx_deinit(struct bnxt *bp,
 		return -EINVAL;
 	}
 
+	/* close the tf session */
+	ulp_ctx_session_close(bp, session);
+
 	/* Free the contents */
 	if (session->cfg_data) {
 		rte_free(session->cfg_data);
@@ -211,6 +286,36 @@ ulp_ctx_attach(struct bnxt_ulp_context *ulp_ctx,
 	return 0;
 }
 
+static int32_t
+ulp_ctx_detach(struct bnxt *bp,
+	       struct bnxt_ulp_session_state *session)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+
+	if (!bp || !session) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	ulp_ctx = &bp->ulp_ctx;
+
+	if (!ulp_ctx->cfg_data)
+		return 0;
+
+	/* TBD call TF_session_detach */
+
+	/* Decrement the ulp context data reference count. */
+	if (ulp_ctx->cfg_data->ref_cnt >= 1) {
+		ulp_ctx->cfg_data->ref_cnt--;
+		if (ulp_ctx_deinit_allowed(bp))
+			ulp_ctx_deinit(bp, session);
+		ulp_ctx->cfg_data = NULL;
+		ulp_ctx->g_tfp = NULL;
+		return 0;
+	}
+	BNXT_TF_DBG(ERR, "context deatach on invalid data\n");
+	return 0;
+}
+
 /*
  * Initialize the state of an ULP session.
  * If the state of an ULP session is not initialized, set it's state to
@@ -297,6 +402,26 @@ ulp_session_init(struct bnxt *bp,
 }
 
 /*
+ * When a device is closed, remove its associated session from the global
+ * session list.
+ */
+static void
+ulp_session_deinit(struct bnxt_ulp_session_state *session)
+{
+	if (!session)
+		return;
+
+	if (!session->cfg_data) {
+		pthread_mutex_lock(&bnxt_ulp_global_mutex);
+		STAILQ_REMOVE(&bnxt_ulp_session_list, session,
+			      bnxt_ulp_session_state, next);
+		pthread_mutex_destroy(&session->bnxt_ulp_mutex);
+		rte_free(session);
+		pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+	}
+}
+
+/*
  * When a port is initialized by dpdk. This functions is called
  * and this function initializes the ULP context and rest of the
  * infrastructure associated with it.
@@ -363,12 +488,52 @@ bnxt_ulp_init(struct bnxt *bp)
 	return rc;
 
 jump_to_error:
+	bnxt_ulp_deinit(bp);
 	return -ENOMEM;
 }
 
 /* Below are the access functions to access internal data of ulp context. */
 
-/* Function to set the Mark DB into the context. */
+/*
+ * When a port is de-initialized by DPDK, this function is called.
+ * It clears the ULP context and the rest of the
+ * infrastructure associated with it.
+ */
+void
+bnxt_ulp_deinit(struct bnxt *bp)
+{
+	struct bnxt_ulp_session_state	*session;
+	struct rte_pci_device		*pci_dev;
+	struct rte_pci_addr		*pci_addr;
+
+	/* Get the session first */
+	pci_dev = RTE_DEV_TO_PCI(bp->eth_dev->device);
+	pci_addr = &pci_dev->addr;
+	pthread_mutex_lock(&bnxt_ulp_global_mutex);
+	session = ulp_get_session(pci_addr);
+	pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+
+	/* session not found then just exit */
+	if (!session)
+		return;
+
+	/* cleanup the eem table scope */
+	ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx);
+
+	/* cleanup the flow database */
+	ulp_flow_db_deinit(&bp->ulp_ctx);
+
+	/* Delete the Mark database */
+	ulp_mark_db_deinit(&bp->ulp_ctx);
+
+	/* Delete the ulp context and tf session */
+	ulp_ctx_detach(bp, session);
+
+	/* Finally delete the bnxt session */
+	ulp_session_deinit(session);
+}
+
+/* Function to set the Mark DB into the context */
 int32_t
 bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
 				struct bnxt_ulp_mark_tbl *mark_tbl)
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index d88225f..b3e9e96 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -47,6 +47,16 @@ struct rte_tf_flow {
 	uint32_t	flow_id;
 };
 
+/*
+ * Allow the deletion of context only for the bnxt device that
+ * created the session
+ * TBD - The implementation of the function should change to
+ * using the reference count once tf_session_attach functionality
+ * is fixed.
+ */
+bool
+ulp_ctx_deinit_allowed(void *bp);
+
 /* Function to set the device id of the hardware. */
 int32_t
 bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx, uint32_t dev_id);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 3f28a73..9e4307e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -92,3 +92,28 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 
 	return -ENOMEM;
 }
+
+/*
+ * Release all resources in the Mark Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ */
+int32_t
+ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_mark_tbl *mtbl;
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+
+	if (mtbl) {
+		rte_free(mtbl->gfid_tbl);
+		rte_free(mtbl->lfid_tbl);
+		rte_free(mtbl);
+
+		/* Safe to ignore on deinit */
+		(void)bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, NULL);
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index b175abd..5948683 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -46,4 +46,12 @@ struct bnxt_ulp_mark_tbl {
 int32_t
 ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Release all resources in the Mark Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the mark manager
+ */
+int32_t
+ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
+
 #endif /* _ULP_MARK_MGR_H_ */
-- 
2.7.4



* [dpdk-dev] [PATCH 17/33] net/bnxt: add helper functions for blob/regfile ops
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (15 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 16/33] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 18/33] net/bnxt: add support to process action tables Venkat Duvvuru
                   ` (16 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

1. blob routines for managing key/mask/result data
2. regfile routines for managing transient data used during flow
   construction (a usage sketch follows below)
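
For item 2, a minimal usage sketch (idx and handle are hypothetical;
enum bnxt_ulp_regfile_index is still empty in this patch and is
populated by the template patches that follow):

	struct ulp_regfile regfile;
	uint64_t val;

	if (!ulp_regfile_init(&regfile))
		return -EINVAL;
	if (!ulp_regfile_write(&regfile, idx, handle))	/* stash a handle */
		return -EINVAL;
	if (!ulp_regfile_read(&regfile, idx, &val))	/* fetch it back */
		return -EINVAL;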

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                 |   1 +
 drivers/net/bnxt/tf_ulp/ulp_template_db.h |  12 +
 drivers/net/bnxt/tf_ulp/ulp_utils.c       | 521 ++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_utils.h       | 279 ++++++++++++++++
 4 files changed, 813 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 87c61bf..dcf1eb4 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -61,6 +61,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/bnxt_ulp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_template_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_utils.c
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index ba2a101..1eed828 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -27,6 +27,18 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST
 };
 
+enum bnxt_ulp_fmf_mask {
+	BNXT_ULP_FMF_MASK_IGNORE,
+	BNXT_ULP_FMF_MASK_ANY,
+	BNXT_ULP_FMF_MASK_EXACT,
+	BNXT_ULP_FMF_MASK_WILDCARD,
+	BNXT_ULP_FMF_MASK_LAST
+};
+
+enum bnxt_ulp_regfile_index {
+	BNXT_ULP_REGFILE_INDEX_LAST
+};
+
 enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_YES = 1
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.c b/drivers/net/bnxt/tf_ulp/ulp_utils.c
new file mode 100644
index 0000000..0d38cf3
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.c
@@ -0,0 +1,521 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#include "ulp_utils.h"
+#include "bnxt_tf_common.h"
+
+/*
+ * Initialize the regfile structure for writing
+ *
+ * regfile [in] Ptr to a regfile instance
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_regfile_init(struct ulp_regfile *regfile)
+{
+	/* validate the arguments */
+	if (!regfile) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+	memset(regfile, 0, sizeof(struct ulp_regfile));
+	return 1; /* Success */
+}
+
+/*
+ * Read a value from the regfile
+ *
+ * regfile [in] The regfile instance. Must be initialized prior to being used
+ *
+ * field [in] The field to be read within the regfile.
+ *
+ * data [out] The variable to copy the field's value into
+ *
+ * returns the size of the data read, zero on failure
+ */
+uint32_t
+ulp_regfile_read(struct ulp_regfile *regfile,
+		 enum bnxt_ulp_regfile_index field,
+		 uint64_t *data)
+{
+	/* validate the arguments */
+	if (!regfile || field >= BNXT_ULP_REGFILE_INDEX_LAST) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+
+	*data = regfile->entry[field].data;
+	return sizeof(*data);
+}
+
+/*
+ * Write a value to the regfile
+ *
+ * regfile [in] The regfile instance.  Must be initialized prior to being used
+ *
+ * field [in] The field to be written within the regfile.
+ *
+ * data [in] The value to be written.  It is stored in the same byte
+ * order in which it is supplied.
+ *
+ * returns the size of the data written, zero on failure
+ */
+uint32_t
+ulp_regfile_write(struct ulp_regfile *regfile,
+		  enum bnxt_ulp_regfile_index field,
+		  uint64_t data)
+{
+	/* validate the arguments */
+	if (!regfile || field >= BNXT_ULP_REGFILE_INDEX_LAST || field < 0) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+
+	regfile->entry[field].data = data;
+	return sizeof(data); /* Success */
+}
+
+static void
+ulp_bs_put_msb(uint8_t *bs, uint16_t bitpos, uint8_t bitlen, uint8_t val)
+{
+	uint8_t bitoffs = bitpos % 8;
+	uint16_t index  = bitpos / 8;
+	uint8_t mask;
+	uint8_t tmp;
+	int8_t shift;
+
+	tmp = bs[index];
+	mask = ((uint8_t)-1 >> (8 - bitlen));
+	shift = 8 - bitoffs - bitlen;
+	val &= mask;
+
+	if (shift >= 0) {
+		tmp &= ~(mask << shift);
+		tmp |= val << shift;
+		bs[index] = tmp;
+	} else {
+		tmp &= ~((uint8_t)-1 >> bitoffs);
+		tmp |= val >> -shift;
+		bs[index++] = tmp;
+
+		tmp = bs[index];
+		tmp &= ((uint8_t)-1 >> (bitlen - (8 - bitoffs)));
+		tmp |= val << (8 + shift);
+		bs[index] = tmp;
+	}
+}
+
+static void
+ulp_bs_put_lsb(uint8_t *bs, uint16_t bitpos, uint8_t bitlen, uint8_t val)
+{
+	uint8_t bitoffs = bitpos % 8;
+	uint16_t index  = bitpos / 8;
+	uint8_t mask;
+	uint8_t tmp;
+	uint8_t shift;
+	uint8_t partial;
+
+	tmp = bs[index];
+	shift = bitoffs;
+
+	if (bitoffs + bitlen <= 8) {
+		mask = ((1 << bitlen) - 1) << shift;
+		tmp &= ~mask;
+		tmp |= ((val << shift) & mask);
+		bs[index] = tmp;
+	} else {
+		partial = 8 - bitoffs;
+		mask = ((1 << partial) - 1) << shift;
+		tmp &= ~mask;
+		tmp |= ((val << shift) & mask);
+		bs[index++] = tmp;
+
+		val >>= partial;
+		partial = bitlen - partial;
+		mask = ((1 << partial) - 1);
+		tmp = bs[index];
+		tmp &= ~mask;
+		tmp |= (val & mask);
+		bs[index] = tmp;
+	}
+}
+
+/* Assuming that val is in Big-Endian Format */
+static uint32_t
+ulp_bs_push_lsb(uint8_t *bs, uint16_t pos, uint8_t len, uint8_t *val)
+{
+	int i;
+	int cnt = (len) / 8;
+	int tlen = len;
+
+	if (cnt > 0 && !(len % 8))
+		cnt -= 1;
+
+	for (i = 0; i < cnt; i++) {
+		ulp_bs_put_lsb(bs, pos, 8, val[cnt - i]);
+		pos += 8;
+		tlen -= 8;
+	}
+
+	/* Handle the remainder bits */
+	if (tlen)
+		ulp_bs_put_lsb(bs, pos, tlen, val[0]);
+	return len;
+}
+
+/* Assuming that val is in Big-Endian Format */
+static uint32_t
+ulp_bs_push_msb(uint8_t *bs, uint16_t pos, uint8_t len, uint8_t *val)
+{
+	int i;
+	int cnt = (len + 7) / 8;
+	int tlen = len;
+
+	/* Handle any remainder bits */
+	int tmp = len % 8;
+
+	if (!tmp)
+		tmp = 8;
+
+	ulp_bs_put_msb(bs, pos, tmp, val[0]);
+
+	pos += tmp;
+	tlen -= tmp;
+
+	for (i = 1; i < cnt; i++) {
+		ulp_bs_put_msb(bs, pos, 8, val[i]);
+		pos += 8;
+		tlen -= 8;
+	}
+
+	return len;
+}
+
+/*
+ * Initializes the blob structure for creating binary blob
+ *
+ * blob [in] The blob to be initialized
+ *
+ * bitlen [in] The bit length of the blob
+ *
+ * order [in] The byte order for the blob.  Currently only supporting
+ * big endian.  All fields are packed with this order.
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_blob_init(struct ulp_blob *blob,
+	      uint16_t bitlen,
+	      enum bnxt_ulp_byte_order order)
+{
+	/* validate the arguments */
+	if (!blob || bitlen > (8 * sizeof(blob->data))) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+	blob->bitlen = bitlen;
+	blob->byte_order = order;
+	blob->write_idx = 0;
+	memset(blob->data, 0, sizeof(blob->data));
+	return 1; /* Success */
+}
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] A pointer to bytes to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The write offset of the blob is updated after each push of data.
+ * returns the number of bits pushed, zero on error.
+ */
+#define ULP_BLOB_BYTE		8
+#define ULP_BLOB_BYTE_HEX	0xFF
+#define BLOB_MASK_CAL(x)	((0xFF << (x)) & 0xFF)
+uint32_t
+ulp_blob_push(struct ulp_blob *blob,
+	      uint8_t *data,
+	      uint32_t datalen)
+{
+	uint32_t rc;
+
+	/* validate the arguments */
+	if (!blob || datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+
+	if (blob->byte_order == BNXT_ULP_BYTE_ORDER_BE)
+		rc = ulp_bs_push_msb(blob->data,
+				     blob->write_idx,
+				     datalen,
+				     data);
+	else
+		rc = ulp_bs_push_lsb(blob->data,
+				     blob->write_idx,
+				     datalen,
+				     data);
+	if (!rc) {
+		BNXT_TF_DBG(ERR, "Failed ro write blob\n");
+		return 0;
+	}
+	blob->write_idx += datalen;
+	return datalen;
+}
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] 64-bit value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The write offset of the blob is updated after each push of data.
+ * NULL returned on error, pointer to the pushed value otherwise.
+ */
+uint8_t *
+ulp_blob_push_64(struct ulp_blob *blob,
+		 uint64_t *data,
+		 uint32_t datalen)
+{
+	uint8_t *val = (uint8_t *)data;
+	int rc;
+
+	int size = (datalen + 7) / 8;
+
+	if (!blob || !data ||
+	    datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0;
+	}
+
+	rc = ulp_blob_push(blob, &val[8 - size], datalen);
+	if (!rc)
+		return 0;
+
+	return &val[8 - size];
+}
+
+/*
+ * Add encap data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The write offset of the blob is updated after each push of data.
+ * returns the number of bits pushed, zero on error.
+ */
+uint32_t
+ulp_blob_push_encap(struct ulp_blob *blob,
+		    uint8_t *data,
+		    uint32_t datalen)
+{
+	uint8_t		*val = (uint8_t *)data;
+	uint32_t	initial_size, write_size = datalen;
+	uint32_t	size = 0;
+
+	if (!blob || !data ||
+	    datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0;
+	}
+
+	initial_size = ULP_BYTE_2_BITS(sizeof(uint64_t)) -
+	    (blob->write_idx % ULP_BYTE_2_BITS(sizeof(uint64_t)));
+	while (write_size > 0) {
+		if (initial_size && write_size > initial_size) {
+			size = initial_size;
+			initial_size = 0;
+		} else if (initial_size && write_size <= initial_size) {
+			size = write_size;
+			initial_size = 0;
+		} else if (write_size > ULP_BYTE_2_BITS(sizeof(uint64_t))) {
+			size = ULP_BYTE_2_BITS(sizeof(uint64_t));
+		} else {
+			size = write_size;
+		}
+		if (!ulp_blob_push(blob, val, size)) {
+			BNXT_TF_DBG(ERR, "push field failed\n");
+			return 0;
+		}
+		val += ULP_BITS_2_BYTE(size);
+		write_size -= size;
+	}
+	return datalen;
+}
+
+/*
+ * Adds pad to an initialized blob at the current offset
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * datalen [in] The number of bits of pad to add
+ *
+ * returns the number of pad bits added, zero on failure
+ */
+uint32_t
+ulp_blob_pad_push(struct ulp_blob *blob,
+		  uint32_t datalen)
+{
+	if (datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "Pad too large for blob\n");
+		return 0;
+	}
+
+	blob->write_idx += datalen;
+	return datalen;
+}
+
+/*
+ * Get the data portion of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * datalen [out] The number of bits that are filled.
+ *
+ * returns a byte array of the blob data.  Returns NULL on error.
+ */
+uint8_t *
+ulp_blob_data_get(struct ulp_blob *blob,
+		  uint16_t *datalen)
+{
+	/* validate the arguments */
+	if (!blob) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return NULL; /* failure */
+	}
+	*datalen = blob->write_idx;
+	return blob->data;
+}
+
+/*
+ * Set the encap swap start index of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * returns void.
+ */
+void
+ulp_blob_encap_swap_idx_set(struct ulp_blob *blob)
+{
+	/* validate the arguments */
+	if (!blob) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return; /* failure */
+	}
+	blob->encap_swap_idx = blob->write_idx;
+}
+
+/*
+ * Perform the encap buffer swap to 64 bit reversal.
+ *
+ * blob [in] The blob's data to be used for swap.
+ *
+ * returns void.
+ */
+void
+ulp_blob_perform_encap_swap(struct ulp_blob *blob)
+{
+	uint32_t		i, idx = 0, end_idx = 0;
+	uint8_t		temp_val_1, temp_val_2;
+
+	/* validate the arguments */
+	if (!blob) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return; /* failure */
+	}
+	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx + 1);
+	end_idx = ULP_BITS_2_BYTE(blob->write_idx);
+
+	while (idx <= end_idx) {
+		for (i = 0; i < 4; i = i + 2) {
+			temp_val_1 = blob->data[idx + i];
+			temp_val_2 = blob->data[idx + i + 1];
+			blob->data[idx + i] = blob->data[idx + 6 - i];
+			blob->data[idx + i + 1] = blob->data[idx + 7 - i];
+			blob->data[idx + 7 - i] = temp_val_2;
+			blob->data[idx + 6 - i] = temp_val_1;
+		}
+		idx += 8;
+	}
+}
+
+/*
+ * Read data from the operand
+ *
+ * operand [in] A pointer to a 16 Byte operand
+ *
+ * val [in/out] The variable to copy the operand to
+ *
+ * bytes [in] The number of bytes to read into val
+ *
+ * returns the number of bytes read, zero on error
+ */
+uint16_t
+ulp_operand_read(uint8_t *operand,
+		 uint8_t *val,
+		 uint16_t bytes)
+{
+	/* validate the arguments */
+	if (!operand || !val) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+	memcpy(val, operand, bytes);
+	return bytes;
+}
+
+/*
+ * copy the buffer in the encap format which is 2 bytes.
+ * The MSB of the src is placed at the LSB of dst.
+ *
+ * dst [out] The destination buffer
+ * src [in] The source buffer
+ * size [in] The size of the buffer.
+ */
+void
+ulp_encap_buffer_copy(uint8_t *dst,
+		      const uint8_t *src,
+		      uint16_t size)
+{
+	uint16_t	idx = 0;
+
+	/* copy 2 bytes at a time. Write MSB to LSB */
+	while ((idx + sizeof(uint16_t)) <= size) {
+		memcpy(&dst[idx], &src[size - idx - sizeof(uint16_t)],
+		       sizeof(uint16_t));
+		idx += sizeof(uint16_t);
+	}
+}
+
+/*
+ * Check whether the buffer is empty
+ *
+ * buf [in] The buffer
+ * size [in] The size of the buffer
+ *
+ * returns 1 if the buffer is empty, zero otherwise
+ */
+int32_t ulp_buffer_is_empty(const uint8_t *buf, uint32_t size)
+{
+	return buf[0] == 0 && !memcmp(buf, buf + 1, size - 1);
+}
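
A worked example of the big-endian packing above: pushing the 12-bit
value 0xabc (supplied big-endian, left-padded as the two bytes 0x0a,
0xbc) into a fresh blob lands as follows:

    struct ulp_blob blob;
    uint8_t val[2] = { 0x0a, 0xbc };

    ulp_blob_init(&blob, 16, BNXT_ULP_BYTE_ORDER_BE);
    ulp_blob_push(&blob, val, 12);
    /* blob.data[0] == 0xab, blob.data[1] == 0xc0, blob.write_idx == 12 */
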
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.h b/drivers/net/bnxt/tf_ulp/ulp_utils.h
new file mode 100644
index 0000000..db88546
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_UTILS_H_
+#define _ULP_UTILS_H_
+
+#include "bnxt.h"
+#include "ulp_template_db.h"
+
+/*
+ * Macros for bitmap sets and gets
+ * These macros can be used if the val are power of 2.
+ */
+#define ULP_BITMAP_SET(bitmap, val)	((bitmap) |= (val))
+#define ULP_BITMAP_RESET(bitmap, val)	((bitmap) &= ~(val))
+#define ULP_BITMAP_ISSET(bitmap, val)	((bitmap) & (val))
+#define ULP_BITSET_CMP(b1, b2)  memcmp(&(b1)->bits, \
+				&(b2)->bits, sizeof((b1)->bits))
+/*
+ * Macros for bitmap sets and gets
+ * These macros can be used if the val are not power of 2 and
+ * are simple index values.
+ */
+#define ULP_INDEX_BITMAP_SIZE	(sizeof(uint64_t) * 8)
+#define ULP_INDEX_BITMAP_CSET(i)	(1UL << \
+			((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE)))
+
+#define ULP_INDEX_BITMAP_SET(b, i)	((b) |= \
+			(1UL << ((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE))))
+
+#define ULP_INDEX_BITMAP_RESET(b, i)	((b) &= \
+			(~(1UL << ((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE)))))
+
+#define ULP_INDEX_BITMAP_GET(b, i)		(((b) >> \
+			((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE))) & 1)
+
+#define ULP_DEVICE_PARAMS_INDEX(tid, dev_id)	\
+	(((tid) << BNXT_ULP_LOG2_MAX_NUM_DEV) | (dev_id))
+
+/* Macro to convert bytes to bits */
+#define ULP_BYTE_2_BITS(byte_x)		((byte_x) * 8)
+/* Macro to convert bits to bytes */
+#define ULP_BITS_2_BYTE(bits_x)		(((bits_x) + 7) / 8)
+/* Macro to convert bits to bytes with no round off*/
+#define ULP_BITS_2_BYTE_NR(bits_x)	((bits_x) / 8)
+
+/*
+ * Making the blob statically sized to 128 bytes for now.
+ * The blob must be initialized with ulp_blob_init prior to using.
+ */
+#define BNXT_ULP_FLMP_BLOB_SIZE	(128)
+#define BNXT_ULP_FLMP_BLOB_SIZE_IN_BITS	ULP_BYTE_2_BITS(BNXT_ULP_FLMP_BLOB_SIZE)
+struct ulp_blob {
+	enum bnxt_ulp_byte_order	byte_order;
+	uint16_t			write_idx;
+	uint16_t			bitlen;
+	uint8_t				data[BNXT_ULP_FLMP_BLOB_SIZE];
+	uint16_t			encap_swap_idx;
+};
+
+/*
+ * The data can likely be only 32 bits for now.  Just size check
+ * the data when being written.
+ */
+#define ULP_REGFILE_ENTRY_SIZE	(sizeof(uint32_t))
+struct ulp_regfile_entry {
+	uint64_t	data;
+	uint32_t	size;
+};
+
+struct ulp_regfile {
+	struct ulp_regfile_entry entry[BNXT_ULP_REGFILE_INDEX_LAST];
+};
+
+/*
+ * Initialize the regfile structure for writing
+ *
+ * regfile [in] Ptr to a regfile instance
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_regfile_init(struct ulp_regfile *regfile);
+
+/*
+ * Read a value from the regfile
+ *
+ * regfile [in] The regfile instance.  Must be initialized prior to being used
+ *
+ * field [in] The field to be read within the regfile.
+ *
+ * data [out] The variable to copy the field's value into
+ *
+ * returns the size of the data read, zero on failure
+ */
+uint32_t
+ulp_regfile_read(struct ulp_regfile *regfile,
+		 enum bnxt_ulp_regfile_index field,
+		 uint64_t *data);
+
+/*
+ * Write a value to the regfile
+ *
+ * regfile [in] The regfile instance.  Must be initialized prior to being used
+ *
+ * field [in] The field to be written within the regfile.
+ *
+ * data [in] The value to be written.  It is stored in the same byte
+ * order in which it is supplied.
+ *
+ * returns the size of the data written, zero on error
+ */
+uint32_t
+ulp_regfile_write(struct ulp_regfile *regfile,
+		  enum bnxt_ulp_regfile_index field,
+		  uint64_t data);
+
+/*
+ * Initializes the blob structure for creating binary blob
+ *
+ * blob [in] The blob to be initialized
+ *
+ * bitlen [in] The bit length of the blob
+ *
+ * order [in] The byte order for the blob.  Currently only supporting
+ * big endian.  All fields are packed with this order.
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_blob_init(struct ulp_blob *blob,
+	      uint16_t bitlen,
+	      enum bnxt_ulp_byte_order order);
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] A pointer to bytes to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The write offset of the blob is updated after each push of data.
+ * returns the number of bits pushed, zero on error.
+ */
+uint32_t
+ulp_blob_push(struct ulp_blob *blob,
+	      uint8_t *data,
+	      uint32_t datalen);
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] 64-bit value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * NULL returned on error, ptr to pushed data otherwise
+ */
+uint8_t *
+ulp_blob_push_64(struct ulp_blob *blob,
+		 uint64_t *data,
+		 uint32_t datalen);
+
+/*
+ * Add encap data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The write offset of the blob is updated after each push of data.
+ * returns the number of bits pushed, zero on error.
+ */
+uint32_t
+ulp_blob_push_encap(struct ulp_blob *blob,
+		    uint8_t *data,
+		    uint32_t datalen);
+
+/*
+ * Get the data portion of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * datalen [out] The number of bits that are filled.
+ *
+ * returns a byte array of the blob data.  Returns NULL on error.
+ */
+uint8_t *
+ulp_blob_data_get(struct ulp_blob *blob,
+		  uint16_t *datalen);
+
+/*
+ * Adds pad to an initialized blob at the current offset
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * datalen [in] The number of bits of pad to add
+ *
+ * returns the number of pad bits added, zero on failure
+ */
+uint32_t
+ulp_blob_pad_push(struct ulp_blob *blob,
+		  uint32_t datalen);
+
+/*
+ * Set the 64 bit swap start index of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * returns void.
+ */
+void
+ulp_blob_encap_swap_idx_set(struct ulp_blob *blob);
+
+/*
+ * Perform the encap buffer swap to 64 bit reversal.
+ *
+ * blob [in] The blob's data to be used for swap.
+ *
+ * returns void.
+ */
+void
+ulp_blob_perform_encap_swap(struct ulp_blob *blob);
+
+/*
+ * Read data from the operand
+ *
+ * operand [in] A pointer to a 16 Byte operand
+ *
+ * val [in/out] The variable to copy the operand to
+ *
+ * bytes [in] The number of bytes to read into val
+ *
+ * returns the number of bytes read, zero on error
+ */
+uint16_t
+ulp_operand_read(uint8_t *operand,
+		 uint8_t *val,
+		 uint16_t bytes);
+
+/*
+ * copy the buffer in the encap format which is 2 bytes.
+ * The MSB of the src is placed at the LSB of dst.
+ *
+ * dst [out] The destination buffer
+ * src [in] The source buffer
+ * size [in] The size of the buffer.
+ */
+void
+ulp_encap_buffer_copy(uint8_t *dst,
+		      const uint8_t *src,
+		      uint16_t size);
+
+/*
+ * Check whether the buffer is empty
+ *
+ * buf [in] The buffer
+ * size [in] The size of the buffer
+ *
+ * returns 1 if the buffer is empty, zero otherwise
+ */
+int32_t ulp_buffer_is_empty(const uint8_t *buf, uint32_t size);
+
+#endif /* _ULP_UTILS_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 18/33] net/bnxt: add support to process action tables
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (16 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 17/33] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 19/33] net/bnxt: add support to process key tables Venkat Duvvuru
                   ` (15 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch processes the action template: it iterates through the
list of action info templates and processes each one.
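
As a concrete example of the result-opcode handling, the VNIC field in
the template below uses BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP with its
operand holding the big-endian action-property index; the mapper
resolves it roughly like this (a sketch of the relevant branch, with
declarations and error checks trimmed):

    uint16_t idx;
    uint8_t *val;

    /* operand[0..1] hold the BE16 property index, here
     * BNXT_ULP_ACT_PROP_IDX_VNIC (byte offset 32).
     */
    ulp_operand_read(fld->result_operand, (uint8_t *)&idx, sizeof(idx));
    idx = tfp_be_to_cpu_16(idx);
    val = &parms->act_prop->act_details[idx];
    /* The real code also right-aligns val when the field is narrower
     * than the stored property.
     */
    ulp_blob_push(blob, val, fld->field_bit_size);    /* 12 bits of VNIC */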

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   1 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 136 ++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |  25 ++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 364 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |  39 +++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 243 +++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     | 104 ++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  48 +++-
 8 files changed, 957 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index dcf1eb4..3a3dad4 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -62,6 +62,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_template_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_utils.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_mapper.c
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 3dd39c1..0e7b433 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -7,7 +7,68 @@
 #include "bnxt.h"
 #include "bnxt_tf_common.h"
 #include "ulp_flow_db.h"
+#include "ulp_utils.h"
 #include "ulp_template_struct.h"
+#include "ulp_mapper.h"
+
+#define ULP_FLOW_DB_RES_DIR_BIT		31
+#define ULP_FLOW_DB_RES_DIR_MASK	0x80000000
+#define ULP_FLOW_DB_RES_FUNC_BITS	28
+#define ULP_FLOW_DB_RES_FUNC_MASK	0x70000000
+#define ULP_FLOW_DB_RES_NXT_MASK	0x0FFFFFFF
+
+/* Macro to copy the nxt_resource_idx */
+#define ULP_FLOW_DB_RES_NXT_SET(dst, src)	{(dst) |= ((src) &\
+					 ULP_FLOW_DB_RES_NXT_MASK); }
+#define ULP_FLOW_DB_RES_NXT_RESET(dst)	((dst) &= ~(ULP_FLOW_DB_RES_NXT_MASK))
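+
+/*
+ * Layout sketch of nxt_resource_idx built with the masks above:
+ * bit 31 holds the direction, bits 30:28 the resource func and
+ * bits 27:0 the next resource index; e.g. dir = 1, func = 2,
+ * nxt = 5 packs to (1 << 31) | (2 << 28) | 5 = 0xa0000005.
+ */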
+
+/*
+ * Helper function to check whether a flow's active bit in the flow
+ * table is set. No validation is done in this function.
+ *
+ * flow_tbl [in] Ptr to flow table
+ * idx [in] The index of the bit to be checked.
+ *
+ * returns 1 if the bit is set or 0 if not set.
+ */
+static int32_t
+ulp_flow_db_active_flow_is_set(struct bnxt_ulp_flow_tbl	*flow_tbl,
+			       uint32_t			idx)
+{
+	uint32_t		active_index;
+
+	active_index = idx / ULP_INDEX_BITMAP_SIZE;
+	return ULP_INDEX_BITMAP_GET(flow_tbl->active_flow_tbl[active_index],
+				    idx);
+}
+
+/*
+ * Helper function to copy the resource params to resource info
+ * No validation is done in this function.
+ *
+ * resource_info [out] Ptr to resource information
+ * params [in] The input params from the caller
+ * returns none
+ */
+static void
+ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info   *resource_info,
+			       struct ulp_flow_db_res_params  *params)
+{
+	resource_info->nxt_resource_idx |= ((params->direction <<
+				      ULP_FLOW_DB_RES_DIR_BIT) &
+				     ULP_FLOW_DB_RES_DIR_MASK);
+	resource_info->nxt_resource_idx |= ((params->resource_func <<
+					     ULP_FLOW_DB_RES_FUNC_BITS) &
+					    ULP_FLOW_DB_RES_FUNC_MASK);
+
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+		resource_info->resource_hndl = (uint32_t)params->resource_hndl;
+		resource_info->resource_type = params->resource_type;
+
+	} else {
+		resource_info->resource_em_handle = params->resource_hndl;
+	}
+}
 
 /*
  * Helper function to allocate the flow table and initialize
@@ -185,3 +246,78 @@ int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 
 	return 0;
 }
+
+/*
+ * Add a resource to the flow database entry.
+ * params->critical_resource has to be set to 0 to allocate a new resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in] The contents to be copied into resource
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+	struct ulp_fdb_resource_info	*resource, *fid_resource;
+	uint32_t			idx;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (tbl_idx < 0 || tbl_idx >= BNXT_ULP_FLOW_TABLE_MAX) {
+		BNXT_TF_DBG(ERR, "Invalid table index\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* check for max flows */
+	if (fid >= flow_tbl->num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return -EINVAL;
+	}
+
+	/* check if the flow is active or not */
+	if (!ulp_flow_db_active_flow_is_set(flow_tbl, fid)) {
+		BNXT_TF_DBG(ERR, "flow does not exist\n");
+		return -EINVAL;
+	}
+
+	/* check for max resource */
+	if ((flow_tbl->num_flows + 1) >= flow_tbl->tail_index) {
+		BNXT_TF_DBG(ERR, "Flow db has reached max resources\n");
+		return -ENOMEM;
+	}
+	fid_resource = &flow_tbl->flow_resources[fid];
+
+	if (!params->critical_resource) {
+		/* Not the critical_resource so allocate a resource */
+		idx = flow_tbl->flow_tbl_stack[flow_tbl->tail_index];
+		resource = &flow_tbl->flow_resources[idx];
+		flow_tbl->tail_index--;
+
+		/* Update the chain list of resource*/
+		ULP_FLOW_DB_RES_NXT_SET(resource->nxt_resource_idx,
+					fid_resource->nxt_resource_idx);
+		/* update the contents */
+		ulp_flow_db_res_params_to_info(resource, params);
+		ULP_FLOW_DB_RES_NXT_RESET(fid_resource->nxt_resource_idx);
+		ULP_FLOW_DB_RES_NXT_SET(fid_resource->nxt_resource_idx,
+					idx);
+	} else {
+		/* critical resource. Just update the fid resource */
+		ulp_flow_db_res_params_to_info(fid_resource, params);
+	}
+
+	/* all good, return success */
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index a2ee8fa..f6055a5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -53,6 +53,15 @@ struct bnxt_ulp_flow_db {
 	struct bnxt_ulp_flow_tbl	flow_tbl[BNXT_ULP_FLOW_TABLE_MAX];
 };
 
+/* flow db resource params to add resources */
+struct ulp_flow_db_res_params {
+	enum tf_dir			direction;
+	enum bnxt_ulp_resource_func	resource_func;
+	uint64_t			resource_hndl;
+	uint32_t			resource_type;
+	uint32_t			critical_resource;
+};
+
 /*
  * Initialize the flow database. Memory is allocated in this
  * call and assigned to the flow database.
@@ -74,4 +83,20 @@ int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt);
  */
 int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
 
+/*
+ * Add a resource to the flow database entry.
+ * params->critical_resource has to be set to 0 to allocate a new resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in] The contents to be copied into resource
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params);
+
 #endif /* _ULP_FLOW_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
new file mode 100644
index 0000000..9cfc382
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -0,0 +1,364 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include "bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_tf_common.h"
+#include "ulp_utils.h"
+#include "bnxt_ulp.h"
+#include "tfp.h"
+#include "tf_ext_flow_handle.h"
+#include "ulp_mark_mgr.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+
+int32_t
+ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms);
+
+/*
+ * Get the size of the action property for a given index.
+ *
+ * idx [in] The index for the action property
+ *
+ * returns the size of the action property.
+ */
+static uint32_t
+ulp_mapper_act_prop_size_get(uint32_t idx)
+{
+	if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST)
+		return 0;
+	return ulp_act_prop_map_table[idx];
+}
+
+/*
+ * Get the list of result fields that implement the flow action
+ *
+ * tbl [in] A single table instance to get the result fields from
+ *
+ * num_rslt_flds [out] The number of result fields in the returned array
+ *
+ * num_encap_flds [out] The number of encap fields in the returned array
+ *
+ * returns array of data fields, or NULL on error
+ */
+static struct bnxt_ulp_mapper_result_field_info *
+ulp_mapper_act_result_fields_get(struct bnxt_ulp_mapper_act_tbl_info *tbl,
+				 uint32_t *num_rslt_flds,
+				 uint32_t *num_encap_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_rslt_flds || !num_encap_flds)
+		return NULL;
+
+	idx		= tbl->result_start_idx;
+	*num_rslt_flds	= tbl->result_num_fields;
+	*num_encap_flds = tbl->encap_num_fields;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_act_result_field_list[idx];
+}
+
+static int32_t
+ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
+				struct bnxt_ulp_mapper_result_field_info *fld,
+				struct ulp_blob *blob)
+{
+	uint16_t idx, size_idx;
+	uint8_t	 *val = NULL;
+	uint64_t regval;
+	uint32_t val_size = 0, field_size = 0;
+
+	switch (fld->result_opcode) {
+	case BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT:
+		val = fld->result_operand;
+		if (!ulp_blob_push(blob, val, fld->field_bit_size)) {
+			BNXT_TF_DBG(ERR, "Failed to add field\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+
+		if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+			BNXT_TF_DBG(ERR, "act_prop[%d] oob\n", idx);
+			return -EINVAL;
+		}
+		val = &parms->act_prop->act_details[idx];
+		field_size = ulp_mapper_act_prop_size_get(idx);
+		if (fld->field_bit_size < ULP_BYTE_2_BITS(field_size)) {
+			field_size  = field_size -
+			    ((fld->field_bit_size + 7) / 8);
+			val += field_size;
+		}
+		if (!ulp_blob_push(blob, val, fld->field_bit_size)) {
+			BNXT_TF_DBG(ERR, "push field failed\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP_SZ:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+
+		if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+			BNXT_TF_DBG(ERR, "act_prop[%d] oob\n", idx);
+			return -EINVAL;
+		}
+		val = &parms->act_prop->act_details[idx];
+
+		/* get the size index next */
+		if (!ulp_operand_read(&fld->result_operand[sizeof(uint16_t)],
+				      (uint8_t *)&size_idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+		size_idx = tfp_be_to_cpu_16(size_idx);
+
+		if (size_idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+			BNXT_TF_DBG(ERR, "act_prop[%d] oob\n", size_idx);
+			return -EINVAL;
+		}
+		memcpy(&val_size, &parms->act_prop->act_details[size_idx],
+		       sizeof(uint32_t));
+		val_size = tfp_be_to_cpu_32(val_size);
+		val_size = ULP_BYTE_2_BITS(val_size);
+		ulp_blob_push_encap(blob, val, val_size);
+		break;
+	case BNXT_ULP_RESULT_OPC_SET_TO_REGFILE:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+
+		idx = tfp_be_to_cpu_16(idx);
+		/* Uninitialized regfile entries return 0 */
+		if (!ulp_regfile_read(parms->regfile, idx, &regval)) {
+			BNXT_TF_DBG(ERR, "regfile[%d] read oob\n", idx);
+			return -EINVAL;
+		}
+
+		val = ulp_blob_push_64(blob, &regval, fld->field_bit_size);
+		if (!val) {
+			BNXT_TF_DBG(ERR, "push field failed\n");
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/* Function to alloc action record and set the table. */
+static int32_t
+ulp_mapper_action_alloc_and_set(struct bnxt_ulp_mapper_parms *parms,
+				struct ulp_blob *blob)
+{
+	struct ulp_flow_db_res_params		fid_parms;
+	struct tf_alloc_tbl_entry_parms		alloc_parms = { 0 };
+	struct tf_free_tbl_entry_parms		free_parms = { 0 };
+	struct bnxt_ulp_mapper_act_tbl_info	*atbls = parms->atbls;
+	int32_t					rc = 0;
+	int32_t trc;
+	uint64_t				idx;
+
+	/* Set the allocation parameters for the table*/
+	alloc_parms.dir = atbls->direction;
+	alloc_parms.type = atbls->table_type;
+	alloc_parms.search_enable = atbls->srch_b4_alloc;
+	alloc_parms.result = ulp_blob_data_get(blob,
+					       &alloc_parms.result_sz_in_bytes);
+	if (!alloc_parms.result) {
+		BNXT_TF_DBG(ERR, "blob is not populated\n");
+		return -EINVAL;
+	}
+
+	rc = tf_alloc_tbl_entry(parms->tfp, &alloc_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "table type= [%d] dir = [%s] alloc failed\n",
+			    alloc_parms.type,
+			    (alloc_parms.dir == TF_DIR_RX) ? "RX" : "TX");
+		return rc;
+	}
+
+	/* Need to calculate the idx for the result record */
+	/*
+	 * TBD: Need to get the stride from tflib instead of having to
+	 * understand the construction of the pointer
+	 */
+	uint64_t tmpidx = alloc_parms.idx;
+
+	if (atbls->table_type == TF_TBL_TYPE_EXT)
+		tmpidx = (alloc_parms.idx * TF_ACTION_RECORD_SZ) >> 4;
+	else
+		tmpidx = alloc_parms.idx;
+
+	idx = tfp_cpu_to_be_64(tmpidx);
+
+	/* Store the allocated index for future use in the regfile */
+	rc = ulp_regfile_write(parms->regfile, atbls->regfile_wr_idx, idx);
+	if (!rc) {
+		BNXT_TF_DBG(ERR, "regfile[%d] write failed\n",
+			    atbls->regfile_wr_idx);
+		rc = -EINVAL;
+		goto error;
+	}
+
+	/*
+	 * Call the set_tbl_entry API if search is not enabled or the
+	 * searched entry is not found.
+	 */
+	if (!atbls->srch_b4_alloc || !alloc_parms.hit) {
+		struct tf_set_tbl_entry_parms set_parm = { 0 };
+		uint16_t	length;
+
+		set_parm.dir	= atbls->direction;
+		set_parm.type	= atbls->table_type;
+		set_parm.idx	= alloc_parms.idx;
+		set_parm.data	= ulp_blob_data_get(blob, &length);
+		set_parm.data_sz_in_bytes = length / 8;
+
+		if (set_parm.type == TF_TBL_TYPE_EXT)
+			bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
+							&set_parm.tbl_scope_id);
+		else
+			set_parm.tbl_scope_id = 0;
+
+		/* set the table entry */
+		rc = tf_set_tbl_entry(parms->tfp, &set_parm);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "table[%d][%s][%d] set failed\n",
+				    set_parm.type,
+				    (set_parm.dir == TF_DIR_RX) ? "RX" : "TX",
+				    set_parm.idx);
+			goto error;
+		}
+	}
+
+	/* Link the resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction		= atbls->direction;
+	fid_parms.resource_func		= atbls->resource_func;
+	fid_parms.resource_type		= atbls->table_type;
+	fid_parms.resource_hndl		= alloc_parms.idx;
+	fid_parms.critical_resource	= 0;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		rc = -EINVAL;
+		goto error;
+	}
+
+	return 0;
+error:
+
+	free_parms.dir	= alloc_parms.dir;
+	free_parms.type	= alloc_parms.type;
+	free_parms.idx	= alloc_parms.idx;
+
+	trc = tf_free_tbl_entry(parms->tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free table entry on failure\n");
+
+	return rc;
+}
+
+/*
+ * Function to process a single action table's info: build the result
+ * blob from its template fields and program the action record.
+ */
+static int32_t
+ulp_mapper_action_info_process(struct bnxt_ulp_mapper_parms *parms,
+			       struct bnxt_ulp_mapper_act_tbl_info *tbl)
+{
+	struct ulp_blob					blob;
+	struct bnxt_ulp_mapper_result_field_info	*flds, *fld;
+	uint32_t					num_flds = 0;
+	uint32_t					encap_flds = 0;
+	uint32_t					i;
+	int32_t						rc;
+	uint16_t					bit_size;
+
+	if (!tbl || !parms->act_prop || !parms->act_bitmap || !parms->regfile)
+		return -EINVAL;
+
+	/* use the max size if encap is enabled */
+	if (tbl->encap_num_fields)
+		bit_size = BNXT_ULP_FLMP_BLOB_SIZE_IN_BITS;
+	else
+		bit_size = tbl->result_bit_size;
+	if (!ulp_blob_init(&blob, bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "action blob init failed\n");
+		return -EINVAL;
+	}
+
+	flds = ulp_mapper_act_result_fields_get(tbl, &num_flds, &encap_flds);
+	if (!flds || !num_flds) {
+		BNXT_TF_DBG(ERR, "Template undefined for action\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < (num_flds + encap_flds); i++) {
+		fld = &flds[i];
+		rc = ulp_mapper_result_field_process(parms,
+						     fld,
+						     &blob);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Action field failed\n");
+			return rc;
+		}
+		/* set the swap index if 64 bit swap is enabled */
+		if (parms->encap_byte_swap && encap_flds) {
+			if ((i + 1) == num_flds)
+				ulp_blob_encap_swap_idx_set(&blob);
+			/* if 64 bit swap is enabled perform the 64bit swap */
+			if ((i + 1) == (num_flds + encap_flds))
+				ulp_blob_perform_encap_swap(&blob);
+		}
+	}
+
+	rc = ulp_mapper_action_alloc_and_set(parms, &blob);
+	return rc;
+}
+
+/*
+ * Function to process the action template. Iterates through the list
+ * of action info templates and processes each one.
+ */
+int32_t
+ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
+{
+	uint32_t	i;
+	int32_t		rc = 0;
+
+	if (!parms->atbls || !parms->num_atbls) {
+		BNXT_TF_DBG(ERR, "No action tables for template[%d][%d].\n",
+			    parms->dev_id, parms->act_tid);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < parms->num_atbls; i++) {
+		rc = ulp_mapper_action_info_process(parms, &parms->atbls[i]);
+		if (rc)
+			return rc;
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
new file mode 100644
index 0000000..adbcec2
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_MAPPER_H_
+#define _ULP_MAPPER_H_
+
+#include <tf_core.h>
+#include <rte_log.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_ulp.h"
+#include "ulp_utils.h"
+
+/* Internal Structure for passing the arguments around */
+struct bnxt_ulp_mapper_parms {
+	uint32_t				dev_id;
+	enum bnxt_ulp_byte_order		order;
+	uint32_t				act_tid;
+	struct bnxt_ulp_mapper_act_tbl_info	*atbls;
+	uint32_t				num_atbls;
+	uint32_t				class_tid;
+	struct bnxt_ulp_mapper_class_tbl_info	*ctbls;
+	uint32_t				num_ctbls;
+	struct ulp_rte_act_prop			*act_prop;
+	struct ulp_rte_act_bitmap		*act_bitmap;
+	struct ulp_rte_hdr_field		*hdr_field;
+	struct ulp_regfile			*regfile;
+	struct tf				*tfp;
+	struct bnxt_ulp_context			*ulp_ctx;
+	uint8_t					encap_byte_swap;
+	uint32_t				fid;
+	enum bnxt_ulp_flow_db_tables		tbl_idx;
+};
+
+#endif /* _ULP_MAPPER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index bc0ffd3..75bf967 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -11,6 +11,88 @@
 #include "ulp_template_db.h"
 #include "ulp_template_field_db.h"
 #include "ulp_template_struct.h"
+uint32_t ulp_act_prop_map_table[] = {
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_SZ] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_SZ,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_TYPE] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_TYPE,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_NUM] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_NUM,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_L3_TYPE,
+	[BNXT_ULP_ACT_PROP_IDX_MPLS_POP_NUM] =
+		BNXT_ULP_ACT_PROP_SZ_MPLS_POP_NUM,
+	[BNXT_ULP_ACT_PROP_IDX_MPLS_PUSH_NUM] =
+		BNXT_ULP_ACT_PROP_SZ_MPLS_PUSH_NUM,
+	[BNXT_ULP_ACT_PROP_IDX_VNIC] =
+		BNXT_ULP_ACT_PROP_SZ_VNIC,
+	[BNXT_ULP_ACT_PROP_IDX_VPORT] =
+		BNXT_ULP_ACT_PROP_SZ_VPORT,
+	[BNXT_ULP_ACT_PROP_IDX_MARK] =
+		BNXT_ULP_ACT_PROP_SZ_MARK,
+	[BNXT_ULP_ACT_PROP_IDX_COUNT] =
+		BNXT_ULP_ACT_PROP_SZ_COUNT,
+	[BNXT_ULP_ACT_PROP_IDX_METER] =
+		BNXT_ULP_ACT_PROP_SZ_METER,
+	[BNXT_ULP_ACT_PROP_IDX_SET_MAC_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_MAC_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_MAC_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_MAC_DST,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_VLAN] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_VLAN,
+	[BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_PCP] =
+		BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_PCP,
+	[BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_VID] =
+		BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_VID,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV4_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV4_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV4_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV4_DST,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV6_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV6_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV6_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV6_DST,
+	[BNXT_ULP_ACT_PROP_IDX_SET_TP_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_TP_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_TP_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_TP_DST,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_0] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_0,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_1] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_1,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_2] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_2,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_3] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_3,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_4] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_4,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_5] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_5,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_6] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_6,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_7] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_7,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_SMAC,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_IP,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_UDP,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN,
+	[BNXT_ULP_ACT_PROP_IDX_LAST] =
+		BNXT_ULP_ACT_PROP_SZ_LAST
+};
 
 struct bnxt_ulp_device_params ulp_device_params[] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
@@ -26,3 +108,164 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 	}
 };
 
+struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {BNXT_ULP_SYM_DECAP_FUNC_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP,
+	.result_operand = {(BNXT_ULP_ACT_PROP_IDX_VNIC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 1eed828..e52cc3f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -39,9 +39,113 @@ enum bnxt_ulp_regfile_index {
 	BNXT_ULP_REGFILE_INDEX_LAST
 };
 
+enum bnxt_ulp_resource_func {
+	BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE = 0,
+	BNXT_ULP_RESOURCE_FUNC_EM_TABLE = 1,
+	BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE = 2,
+	BNXT_ULP_RESOURCE_FUNC_IDENTIFIER = 3,
+	BNXT_ULP_RESOURCE_FUNC_HW_FID = 4,
+	BNXT_ULP_RESOURCE_FUNC_LAST = 5
+};
+
+enum bnxt_ulp_result_opc {
+	BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT = 0,
+	BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP = 1,
+	BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP_SZ = 2,
+	BNXT_ULP_RESULT_OPC_SET_TO_REGFILE = 3,
+	BNXT_ULP_RESULT_OPC_LAST = 4
+};
+
 enum bnxt_ulp_sym {
+	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
 	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_YES = 1
 };
 
+enum bnxt_ulp_act_prop_sz {
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_SZ = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_TYPE = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_NUM = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_L3_TYPE = 4,
+	BNXT_ULP_ACT_PROP_SZ_MPLS_POP_NUM = 4,
+	BNXT_ULP_ACT_PROP_SZ_MPLS_PUSH_NUM = 4,
+	BNXT_ULP_ACT_PROP_SZ_VNIC = 4,
+	BNXT_ULP_ACT_PROP_SZ_VPORT = 4,
+	BNXT_ULP_ACT_PROP_SZ_MARK = 4,
+	BNXT_ULP_ACT_PROP_SZ_COUNT = 4,
+	BNXT_ULP_ACT_PROP_SZ_METER = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_MAC_SRC = 8,
+	BNXT_ULP_ACT_PROP_SZ_SET_MAC_DST = 8,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_VLAN = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_PCP = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_VID = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV4_SRC = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV4_DST = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV6_SRC = 16,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV6_DST = 16,
+	BNXT_ULP_ACT_PROP_SZ_SET_TP_SRC = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_TP_DST = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_0 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_1 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_2 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_3 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_4 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_5 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_6 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_7 = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC = 6,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_SMAC = 6,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG = 8,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP = 32,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SRC = 16,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_UDP = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN = 32,
+	BNXT_ULP_ACT_PROP_SZ_LAST = 4
+};
+
+enum bnxt_ulp_act_prop_idx {
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ = 0,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ = 4,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_SZ = 8,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_TYPE = 12,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_NUM = 16,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE = 20,
+	BNXT_ULP_ACT_PROP_IDX_MPLS_POP_NUM = 24,
+	BNXT_ULP_ACT_PROP_IDX_MPLS_PUSH_NUM = 28,
+	BNXT_ULP_ACT_PROP_IDX_VNIC = 32,
+	BNXT_ULP_ACT_PROP_IDX_VPORT = 36,
+	BNXT_ULP_ACT_PROP_IDX_MARK = 40,
+	BNXT_ULP_ACT_PROP_IDX_COUNT = 44,
+	BNXT_ULP_ACT_PROP_IDX_METER = 48,
+	BNXT_ULP_ACT_PROP_IDX_SET_MAC_SRC = 52,
+	BNXT_ULP_ACT_PROP_IDX_SET_MAC_DST = 60,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_VLAN = 68,
+	BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_PCP = 72,
+	BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_VID = 76,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV4_SRC = 80,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV4_DST = 84,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_SRC = 88,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_DST = 104,
+	BNXT_ULP_ACT_PROP_IDX_SET_TP_SRC = 120,
+	BNXT_ULP_ACT_PROP_IDX_SET_TP_DST = 124,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_0 = 128,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_1 = 132,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_2 = 136,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_3 = 140,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_4 = 144,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_5 = 148,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_6 = 152,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_7 = 156,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC = 160,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC = 166,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG = 172,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP = 180,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC = 212,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP = 228,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN = 232,
+	BNXT_ULP_ACT_PROP_IDX_LAST = 264
+};
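+
+/*
+ * The _IDX values above are byte offsets into
+ * ulp_rte_act_prop.act_details[] and the corresponding _SZ values are
+ * the field widths in bytes; e.g. the VNIC property occupies the four
+ * bytes at offsets 32-35.
+ */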
+
 #endif /* _ULP_TEMPLATE_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 4b9d0b2..2b0a3d7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,7 +17,15 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
-/* Device specific parameters. */
+/*
+ * Structure to hold the action property details.
+ * It is an array of BNXT_ULP_ACT_PROP_IDX_LAST (264) bytes.
+ */
+struct ulp_rte_act_prop {
+	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
+};
+
+/* Device specific parameters */
 struct bnxt_ulp_device_params {
 	uint8_t				description[16];
 	uint32_t			global_fid_enable;
@@ -31,10 +39,44 @@ struct bnxt_ulp_device_params {
 	uint32_t			num_resources_per_flow;
 };
 
+struct bnxt_ulp_mapper_act_tbl_info {
+	enum bnxt_ulp_resource_func	resource_func;
+	enum tf_tbl_type table_type;
+	uint8_t		direction;
+	uint8_t		srch_b4_alloc;
+	uint32_t	result_start_idx;
+	uint16_t	result_bit_size;
+	uint16_t	encap_num_fields;
+	uint16_t	result_num_fields;
+
+	enum bnxt_ulp_regfile_index	regfile_wr_idx;
+};
+
+struct bnxt_ulp_mapper_result_field_info {
+	uint8_t				name[64];
+	enum bnxt_ulp_result_opc	result_opcode;
+	uint16_t			field_bit_size;
+	uint8_t				result_operand[16];
+};
+
 /*
- * The ulp_device_params is indexed by the dev_id.
- * This table maintains the device specific parameters.
+ * The ulp_device_params is indexed by the dev_id
+ * This table maintains the device specific parameters
  */
 extern struct bnxt_ulp_device_params ulp_device_params[];
 
+/*
+ * The ulp_act_result_field_list provides the instructions for creating an
+ * action record.  It uses the same structure as the result list, but is
+ * only used for actions.
+ */
+extern
+struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[];
+
+/*
+ * The ulp_act_prop_map_table provides the mapping to index and size of action
+ * properties.
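+ *
+ * For example, indexing the table with BNXT_ULP_ACT_PROP_IDX_MARK is
+ * expected to yield BNXT_ULP_ACT_PROP_SZ_MARK (see
+ * ulp_mapper_act_prop_size_get()).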
+ */
+extern uint32_t ulp_act_prop_map_table[];
+
 #endif /* _ULP_TEMPLATE_STRUCT_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 19/33] net/bnxt: add support to process key tables
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (17 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 18/33] net/bnxt: add support to process action tables Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 20/33] net/bnxt: add support to free key and action tables Venkat Duvvuru
                   ` (14 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch creates the classifier table entries for a flow.
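
The class template for a flow is processed table by table: each table
entry is dispatched on its resource function to the TCAM, EM or index
table handler, and every allocated resource is linked back to the flow
in the flow db so it can be freed when the flow is destroyed. An
illustrative call sequence (error handling elided; both functions are
introduced by this series):

	rc = ulp_mapper_action_tbls_process(parms);
	if (!rc)
		rc = ulp_mapper_class_tbls_process(parms);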

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c            | 773 +++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c          |  80 ++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h          |  18 +
 drivers/net/bnxt/tf_ulp/ulp_template_db.c       | 896 ++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h       | 142 +++-
 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h | 133 ++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h   |  93 ++-
 7 files changed, 2127 insertions(+), 8 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 9cfc382..a041394 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -19,6 +19,9 @@
 int32_t
 ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms);
 
+int32_t
+ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms);
+
 /*
  * Get the size of the action property for a given index.
  *
@@ -37,10 +40,65 @@ ulp_mapper_act_prop_size_get(uint32_t idx)
 /*
  * Get the list of result fields that implement the flow action
  *
+ * tbl [in] A single table instance to get the key fields from
+ *
+ * num_flds [out] The number of key fields in the returned array
+ *
+ * Returns array of key fields, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_class_key_field_info *
+ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			  uint32_t *num_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_flds)
+		return NULL;
+
+	idx		= tbl->key_start_idx;
+	*num_flds	= tbl->key_num_fields;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_class_key_field_list[idx];
+}
+
+/*
+ * Get the list of data fields that implement the flow.
+ *
+ * tbl [in] A single table instance to get the data fields from
+ *
+ * num_flds [out] The number of data fields in the returned array.
+ *
+ * Returns array of data fields, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_result_field_info *
+ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			     uint32_t *num_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_flds)
+		return NULL;
+
+	idx		= tbl->result_start_idx;
+	*num_flds	= tbl->result_num_fields;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_class_result_field_list[idx];
+}
+
+/*
+ * Get the list of result fields that implement the flow action.
+ *
  * tbl [in] A single table instance to get the results fields
  * from num_flds [out] The number of data fields in the returned
- * array
- * returns array of data fields, or NULL on error
+ * array.
+ *
+ * Returns array of data fields, or NULL on error.
  */
 static struct bnxt_ulp_mapper_result_field_info *
 ulp_mapper_act_result_fields_get(struct bnxt_ulp_mapper_act_tbl_info *tbl,
@@ -60,6 +118,106 @@ ulp_mapper_act_result_fields_get(struct bnxt_ulp_mapper_act_tbl_info *tbl,
 	return &ulp_act_result_field_list[idx];
 }
 
+/*
+ * Get the list of ident fields that implement the flow
+ *
+ * tbl [in] A single table instance to get the ident fields from
+ *
+ * num_flds [out] The number of ident fields in the returned array
+ *
+ * Returns array of ident fields, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_ident_info *
+ulp_mapper_ident_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			    uint32_t *num_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_flds)
+		return NULL;
+
+	idx = tbl->ident_start_idx;
+	*num_flds = tbl->ident_nums;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_ident_list[idx];
+}
+
+static int32_t
+ulp_mapper_ident_process(struct bnxt_ulp_mapper_parms *parms,
+			 struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			 struct bnxt_ulp_mapper_ident_info *ident)
+{
+	struct ulp_flow_db_res_params	fid_parms;
+	uint64_t id = 0;
+	int32_t idx;
+	struct tf_alloc_identifier_parms iparms = { 0 };
+	struct tf_free_identifier_parms free_parms = { 0 };
+	struct tf *tfp;
+	int rc;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get tf pointer\n");
+		return -EINVAL;
+	}
+
+	idx = ident->regfile_wr_idx;
+
+	iparms.ident_type = ident->ident_type;
+	iparms.dir = tbl->direction;
+
+	rc = tf_alloc_identifier(tfp, &iparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Alloc ident %s:%d failed.\n",
+			    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
+			    iparms.ident_type);
+		return rc;
+	}
+
+	id = (uint64_t)tfp_cpu_to_be_64(iparms.id);
+	if (!ulp_regfile_write(parms->regfile, idx, id)) {
+		BNXT_TF_DBG(ERR, "Regfile[%d] write failed.\n", idx);
+		rc = -EINVAL;
+		/* Need to free the identifier, so goto error */
+		goto error;
+	}
+
+	/* Link the resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction		= tbl->direction;
+	fid_parms.resource_func	= ident->resource_func;
+	fid_parms.resource_type	= ident->ident_type;
+	fid_parms.resource_hndl	= iparms.id;
+	fid_parms.critical_resource	= 0;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		/* Need to free the identifier, so goto error */
+		goto error;
+	}
+
+	return 0;
+
+error:
+	/* Need to free the identifier */
+	free_parms.dir		= tbl->direction;
+	free_parms.ident_type	= ident->ident_type;
+	free_parms.id		= iparms.id;
+
+	(void)tf_free_identifier(tfp, &free_parms);
+
+	BNXT_TF_DBG(ERR, "Ident process failed for %s:%s\n",
+		    ident->name,
+		    (tbl->direction == TF_DIR_RX) ? "RX" : "TX");
+	return rc;
+}
+
 static int32_t
 ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 				struct bnxt_ulp_mapper_result_field_info *fld,
@@ -163,6 +321,89 @@ ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 
 /* Function to alloc action record and set the table. */
 static int32_t
+ulp_mapper_keymask_field_process(struct bnxt_ulp_mapper_parms *parms,
+				 struct bnxt_ulp_mapper_class_key_field_info *f,
+				 struct ulp_blob *blob,
+				 uint8_t is_key)
+{
+	uint64_t regval;
+	uint16_t idx, bitlen;
+	uint32_t opcode;
+	uint8_t *operand;
+	struct ulp_regfile *regfile = parms->regfile;
+	uint8_t *val = NULL;
+	struct bnxt_ulp_mapper_class_key_field_info *fld = f;
+
+	if (is_key) {
+		operand = fld->spec_operand;
+		opcode	= fld->spec_opcode;
+	} else {
+		operand = fld->mask_operand;
+		opcode	= fld->mask_opcode;
+	}
+
+	bitlen = fld->field_bit_size;
+
+	switch (opcode) {
+	case BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT:
+		val = operand;
+		if (!ulp_blob_push(blob, val, bitlen)) {
+			BNXT_TF_DBG(ERR, "push to key blob failed\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_SPEC_OPC_ADD_PAD:
+		if (!ulp_blob_pad_push(blob, bitlen)) {
+			BNXT_TF_DBG(ERR, "Pad too large for blob\n");
+			return -EINVAL;
+		}
+
+		break;
+	case BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD:
+		if (!ulp_operand_read(operand, (uint8_t *)&idx,
+				      sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "key operand read failed.\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+		if (is_key)
+			val = parms->hdr_field[idx].spec;
+		else
+			val = parms->hdr_field[idx].mask;
+
+		if (!ulp_blob_push(blob, val, bitlen)) {
+			BNXT_TF_DBG(ERR, "push to key blob failed\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_SPEC_OPC_SET_TO_REGFILE:
+		if (!ulp_operand_read(operand, (uint8_t *)&idx,
+				      sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "key operand read failed.\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+
+		if (!ulp_regfile_read(regfile, idx, &regval)) {
+			BNXT_TF_DBG(ERR, "regfile[%d] read failed.\n",
+				    idx);
+			return -EINVAL;
+		}
+
+		val = ulp_blob_push_64(blob, &regval, bitlen);
+		if (!val) {
+			BNXT_TF_DBG(ERR, "push to key blob failed\n");
+			return -EINVAL;
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+/* Function to alloc action record and set the table. */
+static int32_t
 ulp_mapper_action_alloc_and_set(struct bnxt_ulp_mapper_parms *parms,
 				struct ulp_blob *blob)
 {
@@ -338,6 +579,489 @@ ulp_mapper_action_info_process(struct bnxt_ulp_mapper_parms *parms,
 	return rc;
 }
 
+static int32_t
+ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			    struct bnxt_ulp_mapper_class_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_class_key_field_info	*kflds;
+	struct ulp_blob key, mask, data;
+	uint32_t i, num_kflds;
+	struct tf *tfp;
+	int32_t rc, trc;
+	struct tf_alloc_tcam_entry_parms aparms		= { 0 };
+	struct tf_set_tcam_entry_parms sparms		= { 0 };
+	struct ulp_flow_db_res_params	fid_parms	= { 0 };
+	struct tf_free_tcam_entry_parms free_parms	= { 0 };
+	uint32_t hit = 0;
+	uint16_t tmplen = 0;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get truflow pointer\n");
+		return -EINVAL;
+	}
+
+	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
+	if (!kflds || !num_kflds) {
+		BNXT_TF_DBG(ERR, "Failed to get key fields\n");
+		return -EINVAL;
+	}
+
+	if (!ulp_blob_init(&key, tbl->key_bit_size, parms->order) ||
+	    !ulp_blob_init(&mask, tbl->key_bit_size, parms->order) ||
+	    !ulp_blob_init(&data, tbl->result_bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "blob inits failed.\n");
+		return -EINVAL;
+	}
+
+	/* create the key/mask */
+	/*
+	 * NOTE: The WC table will require some kind of flag to handle the
+	 * mode bits within the key/mask
+	 */
+	for (i = 0; i < num_kflds; i++) {
+		/* Setup the key */
+		rc = ulp_mapper_keymask_field_process(parms, &kflds[i],
+						      &key, 1);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Key field set failed.\n");
+			return rc;
+		}
+
+		/* Setup the mask */
+		rc = ulp_mapper_keymask_field_process(parms, &kflds[i],
+						      &mask, 0);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Mask field set failed.\n");
+			return rc;
+		}
+	}
+
+	aparms.dir		= tbl->direction;
+	aparms.tcam_tbl_type	= tbl->table_type;
+	aparms.search_enable	= tbl->srch_b4_alloc;
+	aparms.key_sz_in_bits	= tbl->key_bit_size;
+	aparms.key		= ulp_blob_data_get(&key, &tmplen);
+	if (tbl->key_bit_size != tmplen) {
+		BNXT_TF_DBG(ERR, "Key len (%d) != Expected (%d)\n",
+			    tmplen, tbl->key_bit_size);
+		return -EINVAL;
+	}
+
+	aparms.mask		= ulp_blob_data_get(&mask, &tmplen);
+	if (tbl->key_bit_size != tmplen) {
+		BNXT_TF_DBG(ERR, "Mask len (%d) != Expected (%d)\n",
+			    tmplen, tbl->key_bit_size);
+		return -EINVAL;
+	}
+
+	aparms.priority		= tbl->priority;
+
+	/*
+	 * All failures after a successful alloc require the entry to be
+	 * freed; do not return directly on failure, goto error instead.
+	 */
+	rc = tf_alloc_tcam_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "tcam alloc failed rc=%d.\n", rc);
+		return rc;
+	}
+
+	hit = aparms.hit;
+
+	/* Build the result */
+	if (!tbl->srch_b4_alloc || !hit) {
+		struct bnxt_ulp_mapper_result_field_info *dflds;
+		struct bnxt_ulp_mapper_ident_info *idents;
+		uint32_t num_dflds, num_idents;
+
+		/* Alloc identifiers */
+		idents = ulp_mapper_ident_fields_get(tbl, &num_idents);
+
+		for (i = 0; i < num_idents; i++) {
+			rc = ulp_mapper_ident_process(parms, tbl, &idents[i]);
+
+			/* Already logged the error, just return */
+			if (rc)
+				goto error;
+		}
+
+		/* Create the result data blob */
+		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
+		if (!dflds || !num_dflds) {
+			BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
+			rc = -EINVAL;
+			goto error;
+		}
+
+		for (i = 0; i < num_dflds; i++) {
+			rc = ulp_mapper_result_field_process(parms,
+							     &dflds[i],
+							     &data);
+			if (rc) {
+				BNXT_TF_DBG(ERR, "Failed to set data fields\n");
+				goto error;
+			}
+		}
+
+		sparms.dir		= aparms.dir;
+		sparms.tcam_tbl_type	= aparms.tcam_tbl_type;
+		sparms.idx		= aparms.idx;
+		/* Already verified the key/mask lengths */
+		sparms.key		= ulp_blob_data_get(&key, &tmplen);
+		sparms.mask		= ulp_blob_data_get(&mask, &tmplen);
+		sparms.key_sz_in_bits	= tbl->key_bit_size;
+		sparms.result		= ulp_blob_data_get(&data, &tmplen);
+
+		if (tbl->result_bit_size != tmplen) {
+			BNXT_TF_DBG(ERR, "Result len (%d) != Expected (%d)\n",
+				    tmplen, tbl->result_bit_size);
+			rc = -EINVAL;
+			goto error;
+		}
+		sparms.result_sz_in_bits = tbl->result_bit_size;
+
+		rc = tf_set_tcam_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "tcam[%d][%s][%d] write failed.\n",
+				    sparms.tcam_tbl_type,
+				    (sparms.dir == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx);
+			goto error;
+		}
+	} else {
+		BNXT_TF_DBG(ERR, "Not supporting search before alloc now\n");
+		rc = -EINVAL;
+		goto error;
+	}
+
+	/* Link the resource to the flow in the flow db */
+	fid_parms.direction = tbl->direction;
+	fid_parms.resource_func	= tbl->resource_func;
+	fid_parms.resource_type	= tbl->table_type;
+	fid_parms.critical_resource = tbl->critical_resource;
+	fid_parms.resource_hndl	= aparms.idx;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		/* Need to free the tcam entry, so goto error */
+		goto error;
+	}
+
+	return 0;
+error:
+	free_parms.dir			= tbl->direction;
+	free_parms.tcam_tbl_type	= tbl->table_type;
+	free_parms.idx			= aparms.idx;
+	trc = tf_free_tcam_entry(tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free tcam[%d][%d][%d] on failure\n",
+			    tbl->table_type, tbl->direction, aparms.idx);
+
+	return rc;
+}
+
+static int32_t
+ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			  struct bnxt_ulp_mapper_class_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_class_key_field_info	*kflds;
+	struct bnxt_ulp_mapper_result_field_info *dflds;
+	struct ulp_blob key, data;
+	uint32_t i, num_kflds, num_dflds;
+	uint16_t tmplen;
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	struct ulp_rte_act_prop	 *a_prop = parms->act_prop;
+	struct ulp_flow_db_res_params	fid_parms = { 0 };
+	struct tf_insert_em_entry_parms iparms = { 0 };
+	struct tf_delete_em_entry_parms free_parms = { 0 };
+	int32_t	trc;
+	int32_t rc = 0;
+
+	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
+	if (!kflds || !num_kflds) {
+		BNXT_TF_DBG(ERR, "Failed to get key fields\n");
+		return -EINVAL;
+	}
+
+	/* Initialize the key/result blobs */
+	if (!ulp_blob_init(&key, tbl->blob_key_bit_size, parms->order) ||
+	    !ulp_blob_init(&data, tbl->result_bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "blob inits failed.\n");
+		return -EINVAL;
+	}
+
+	/* create the key */
+	for (i = 0; i < num_kflds; i++) {
+		/* Setup the key */
+		rc = ulp_mapper_keymask_field_process(parms, &kflds[i],
+						      &key, 1);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Key field set failed.\n");
+			return rc;
+		}
+	}
+
+	/*
+	 * TBD: Normally should process identifiers in case of using recycle or
+	 * loopback.  Not supporting recycle for now.
+	 */
+
+	/* Create the result data blob */
+	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
+	if (!dflds || !num_dflds) {
+		BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_dflds; i++) {
+		struct bnxt_ulp_mapper_result_field_info *fld;
+
+		fld = &dflds[i];
+
+		rc = ulp_mapper_result_field_process(parms,
+						     fld,
+						     &data);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Failed to set data fields.\n");
+			return rc;
+		}
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
+					     &iparms.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope rc=%d\n", rc);
+		return rc;
+	}
+
+	/*
+	 * NOTE: the actual blob size will differ from the size in the tbl
+	 * entry due to the padding.
+	 */
+	iparms.dup_check		= 0;
+	iparms.dir			= tbl->direction;
+	iparms.mem			= tbl->mem;
+	iparms.key			= ulp_blob_data_get(&key, &tmplen);
+	iparms.key_sz_in_bits		= tbl->key_bit_size;
+	iparms.em_record		= ulp_blob_data_get(&data, &tmplen);
+	iparms.em_record_sz_in_bits	= tbl->result_bit_size;
+
+	rc = tf_insert_em_entry(tfp, &iparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to insert em entry rc=%d.\n", rc);
+		return rc;
+	}
+
+	if (tbl->mark_enable &&
+	    ULP_BITMAP_ISSET(parms->act_bitmap->bits,
+			     BNXT_ULP_ACTION_BIT_MARK)) {
+		uint32_t val, mark, gfid, flag;
+		/* TBD: Need to determine if GFID is enabled globally */
+		if (sizeof(val) != BNXT_ULP_ACT_PROP_SZ_MARK) {
+			BNXT_TF_DBG(ERR, "Mark size (%d) != expected (%ld)\n",
+				    BNXT_ULP_ACT_PROP_SZ_MARK, sizeof(val));
+			rc = -EINVAL;
+			goto error;
+		}
+
+		memcpy(&val,
+		       &a_prop->act_details[BNXT_ULP_ACT_PROP_IDX_MARK],
+		       sizeof(val));
+
+		mark = tfp_be_to_cpu_32(val);
+
+		TF_GET_GFID_FROM_FLOW_ID(iparms.flow_id, gfid);
+		TF_GET_FLAG_FROM_FLOW_ID(iparms.flow_id, flag);
+
+		rc = ulp_mark_db_mark_add(parms->ulp_ctx,
+					  (flag == TF_GFID_TABLE_EXTERNAL),
+					  gfid,
+					  mark);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Failed to add mark to flow\n");
+			goto error;
+		}
+
+		/*
+		 * Link the mark resource to the flow in the flow db
+		 * The mark is never the critical resource, so it is 0.
+		 */
+		memset(&fid_parms, 0, sizeof(fid_parms));
+		fid_parms.direction	= tbl->direction;
+		fid_parms.resource_func	= BNXT_ULP_RESOURCE_FUNC_HW_FID;
+		fid_parms.resource_type	= tbl->table_type;
+		fid_parms.resource_hndl	= iparms.flow_id;
+		fid_parms.critical_resource = 0;
+
+		rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+					      parms->tbl_idx,
+					      parms->fid,
+					      &fid_parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Fail to link res to flow rc = %d\n",
+				    rc);
+			/* Need to free the em entry, so goto error */
+			goto error;
+		}
+	}
+
+	/* Link the EM resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction		= tbl->direction;
+	fid_parms.resource_func		= tbl->resource_func;
+	fid_parms.resource_type		= tbl->table_type;
+	fid_parms.critical_resource	= tbl->critical_resource;
+	fid_parms.resource_hndl		= iparms.flow_handle;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Fail to link res to flow rc = %d\n",
+			    rc);
+		/* Need to free the em entry, so goto error */
+		goto error;
+	}
+
+	return 0;
+error:
+	free_parms.dir		= iparms.dir;
+	free_parms.mem		= iparms.mem;
+	free_parms.tbl_scope_id	= iparms.tbl_scope_id;
+	free_parms.flow_handle	= iparms.flow_handle;
+
+	trc = tf_delete_em_entry(tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to delete EM entry on failed add\n");
+
+	return rc;
+}
+
+static int32_t
+ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			     struct bnxt_ulp_mapper_class_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_result_field_info *flds;
+	struct ulp_flow_db_res_params	fid_parms;
+	struct ulp_blob	data;
+	uint64_t idx;
+	uint16_t tmplen;
+	uint32_t i, num_flds;
+	int32_t rc = 0, trc = 0;
+	struct tf_alloc_tbl_entry_parms	aparms = { 0 };
+	struct tf_set_tbl_entry_parms	sparms = { 0 };
+	struct tf_free_tbl_entry_parms	free_parms = { 0 };
+
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+
+	if (!ulp_blob_init(&data, tbl->result_bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "Failed initial index table blob\n");
+		return -EINVAL;
+	}
+
+	flds = ulp_mapper_result_fields_get(tbl, &num_flds);
+	if (!flds || !num_flds) {
+		BNXT_TF_DBG(ERR, "Template undefined for action\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_flds; i++) {
+		rc = ulp_mapper_result_field_process(parms,
+						     &flds[i],
+						     &data);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "data field failed\n");
+			return rc;
+		}
+	}
+
+	aparms.dir		= tbl->direction;
+	aparms.type		= tbl->table_type;
+	aparms.search_enable	= tbl->srch_b4_alloc;
+	aparms.result		= ulp_blob_data_get(&data, &tmplen);
+	aparms.result_sz_in_bytes = ULP_SZ_BITS2BYTES(tbl->result_bit_size);
+
+	/* All failures after the alloc succeeds require a free */
+	rc = tf_alloc_tbl_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Alloc table[%d][%s] failed rc=%d\n",
+			    tbl->table_type,
+			    (tbl->direction == TF_DIR_RX) ? "RX" : "TX",
+			    rc);
+		return rc;
+	}
+
+	/* Always storing values in Regfile in BE */
+	idx = tfp_cpu_to_be_64(aparms.idx);
+	rc = ulp_regfile_write(parms->regfile, tbl->regfile_wr_idx, idx);
+	if (!rc) {
+		BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
+			    tbl->regfile_wr_idx);
+		rc = -EINVAL;
+		goto error;
+	}
+
+	if (!tbl->srch_b4_alloc) {
+		sparms.dir		= tbl->direction;
+		sparms.type		= tbl->table_type;
+		sparms.data		= ulp_blob_data_get(&data, &tmplen);
+		sparms.data_sz_in_bytes =
+			ULP_SZ_BITS2BYTES(tbl->result_bit_size);
+		sparms.idx		= aparms.idx;
+
+		rc = tf_set_tbl_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Set table[%d][%s][%d] failed rc=%d\n",
+				    tbl->table_type,
+				    (tbl->direction == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx,
+				    rc);
+
+			goto error;
+		}
+	}
+
+	/* Link the resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction	= tbl->direction;
+	fid_parms.resource_func	= tbl->resource_func;
+	fid_parms.resource_type	= tbl->table_type;
+	fid_parms.resource_hndl	= aparms.idx;
+	fid_parms.critical_resource	= 0;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		goto error;
+	}
+
+	return rc;
+error:
+	/*
+	 * Free the allocated resource since we failed to either
+	 * write to the entry or link the flow
+	 */
+	free_parms.dir	= tbl->direction;
+	free_parms.type	= tbl->table_type;
+	free_parms.idx	= aparms.idx;
+
+	trc = tf_free_tbl_entry(tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free tbl entry on failure\n");
+
+	return rc;
+}
+
 /*
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
@@ -362,3 +1086,48 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	return rc;
 }
+
+/* Create the classifier table entries for a flow. */
+int32_t
+ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
+{
+	uint32_t	i;
+	int32_t		rc = 0;
+
+	if (!parms)
+		return -EINVAL;
+
+	if (!parms->ctbls || !parms->num_ctbls) {
+		BNXT_TF_DBG(ERR, "No class tables for template[%d][%d].\n",
+			    parms->dev_id, parms->class_tid);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < parms->num_ctbls; i++) {
+		struct bnxt_ulp_mapper_class_tbl_info *tbl = &parms->ctbls[i];
+
+		switch (tbl->resource_func) {
+		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
+			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
+			break;
+		case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+			rc = ulp_mapper_em_tbl_process(parms, tbl);
+			break;
+		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+			rc = ulp_mapper_index_tbl_process(parms, tbl);
+			break;
+		default:
+			BNXT_TF_DBG(ERR, "Unexpected class resource %d\n",
+				    tbl->resource_func);
+			return -EINVAL;
+		}
+
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Resource type %d failed\n",
+				    tbl->resource_func);
+			return rc;
+		}
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 9e4307e..837064e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -6,14 +6,71 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_log.h>
+#include "bnxt.h"
 #include "bnxt_ulp.h"
 #include "tf_ext_flow_handle.h"
 #include "ulp_mark_mgr.h"
 #include "bnxt_tf_common.h"
-#include "../bnxt.h"
 #include "ulp_template_db.h"
 #include "ulp_template_struct.h"
 
+static inline uint32_t
+ulp_mark_db_idx_get(bool is_gfid, uint32_t fid, struct bnxt_ulp_mark_tbl *mtbl)
+{
+	uint32_t idx = 0, hashtype = 0;
+
+	if (is_gfid) {
+		TF_GET_HASH_TYPE_FROM_GFID(fid, hashtype);
+		TF_GET_HASH_INDEX_FROM_GFID(fid, idx);
+
+		/* Need to truncate anything beyond supported flows */
+		idx &= mtbl->gfid_mask;
+
+		if (hashtype)
+			idx |= mtbl->gfid_type_bit;
+	} else {
+		idx = fid;
+	}
+
+	return idx;
+}
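+
+/*
+ * Illustrative example (the mask values depend on how the mark db was
+ * sized at init): with gfid_mask = 0xfff and gfid_type_bit = 0x1000, a
+ * GFID with hash index 0x1234 and a non-zero hash type maps to index
+ * (0x1234 & 0xfff) | 0x1000 = 0x1234.
+ */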
+
+static int32_t
+ulp_mark_db_mark_set(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t mark)
+{
+	struct bnxt_ulp_mark_tbl	*mtbl;
+	uint32_t	idx = 0;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+	if (!mtbl) {
+		BNXT_TF_DBG(ERR, "Unable to get Mark DB\n");
+		return -EINVAL;
+	}
+
+	idx = ulp_mark_db_idx_get(is_gfid, fid, mtbl);
+
+	if (is_gfid) {
+		BNXT_TF_DBG(ERR, "Set GFID[0x%0x] = 0x%0x\n", idx, mark);
+
+		mtbl->gfid_tbl[idx].mark_id = mark;
+		mtbl->gfid_tbl[idx].valid = true;
+	} else {
+		/* For the LFID, the FID is used as the index */
+		mtbl->lfid_tbl[fid].mark_id = mark;
+		mtbl->lfid_tbl[fid].valid = true;
+	}
+
+	return 0;
+}
+
 /*
  * Allocate and Initialize all Mark Manager resources for this ulp context.
  *
@@ -117,3 +174,24 @@ ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
 
 	return 0;
 }
+
+/*
+ * Adds a Mark to the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * gfid [in] The flow id that is returned by HW in the BD
+ *
+ * mark [in] The mark to be associated with the FID
+ *
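+ * Returns 0 on success, negative on failure.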
+ */
+int32_t
+ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t gfid,
+		     uint32_t mark)
+{
+	return ulp_mark_db_mark_set(ctxt, is_gfid, gfid, mark);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index 5948683..18abea4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -54,4 +54,22 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
 int32_t
 ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Adds a Mark to the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * gfid [in] The flow id that is returned by HW in the BD
+ *
+ * mark [in] The mark to be associated with the FID
+ *
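+ * Returns 0 on success, negative on failure.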
+ */
+int32_t
+ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t gfid,
+		     uint32_t mark);
+
 #endif /* _ULP_MARK_MGR_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 75bf967..ba06493 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -108,6 +108,902 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 	}
 };
 
+struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
+	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 0
+	}
+};
+
+struct bnxt_ulp_mapper_class_tbl_info ulp_class_tbl_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.table_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 0,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 0,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.ident_start_idx = 0,
+	.ident_nums = 1,
+	.mark_enable = BNXT_ULP_MARK_ENABLE_NO,
+	.critical_resource = 0,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.table_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 13,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 13,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.ident_start_idx = 1,
+	.ident_nums = 1,
+	.mark_enable = BNXT_ULP_MARK_ENABLE_NO,
+	.critical_resource = 0,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.table_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 55,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 197,
+	.key_num_fields = 11,
+	.result_start_idx = 21,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.ident_start_idx = 2,
+	.ident_nums = 0,
+	.mark_enable = BNXT_ULP_MARK_ENABLE_YES,
+	.critical_resource = 1,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	}
+};
+
+struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_ETH_DMAC >> 8) & 0xff,
+		BNXT_ULP_HF0_O_ETH_DMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {(BNXT_ULP_HF0_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF0_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF0_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_L3_HDR_TYPE_IPV4,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_L2_HDR_TYPE_DIX,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x40, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_PKT_TYPE_L2,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_ADD_PAD,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF0_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF0_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF0_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF0_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_ETH_SMAC >> 8) & 0xff,
+		BNXT_ULP_HF0_O_ETH_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_REGFILE,
+	.spec_operand = {(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
+struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_REGFILE,
+	.result_operand = {(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x40, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {(0x00fd >> 8) & 0xff,
+		0x00fd & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 5,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x15, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 33,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_REGFILE,
+	.result_operand = {(BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 5,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x03, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 9,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {(0x00c5 >> 8) & 0xff,
+		0x00c5 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x03, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
+struct bnxt_ulp_mapper_ident_info ulp_ident_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+	.ident_type = TF_IDENT_TYPE_L2_CTXT,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0,
+	.ident_bit_size = 10,
+	.ident_bit_pos = 54
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+	.ident_type = TF_IDENT_TYPE_EM_PROF,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0,
+	.ident_bit_size = 8,
+	.ident_bit_pos = 2
+	}
+};
+
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
 	.field_bit_size = 14,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index e52cc3f..733836a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -13,6 +13,37 @@
 
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 
+enum bnxt_ulp_action_bit {
+	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
+	BNXT_ULP_ACTION_BIT_DROP             = 0x0000000000000002,
+	BNXT_ULP_ACTION_BIT_COUNT            = 0x0000000000000004,
+	BNXT_ULP_ACTION_BIT_RSS              = 0x0000000000000008,
+	BNXT_ULP_ACTION_BIT_METER            = 0x0000000000000010,
+	BNXT_ULP_ACTION_BIT_VNIC             = 0x0000000000000020,
+	BNXT_ULP_ACTION_BIT_VPORT            = 0x0000000000000040,
+	BNXT_ULP_ACTION_BIT_VXLAN_DECAP      = 0x0000000000000080,
+	BNXT_ULP_ACTION_BIT_NVGRE_DECAP      = 0x0000000000000100,
+	BNXT_ULP_ACTION_BIT_OF_POP_MPLS      = 0x0000000000000200,
+	BNXT_ULP_ACTION_BIT_OF_PUSH_MPLS     = 0x0000000000000400,
+	BNXT_ULP_ACTION_BIT_MAC_SWAP         = 0x0000000000000800,
+	BNXT_ULP_ACTION_BIT_SET_MAC_SRC      = 0x0000000000001000,
+	BNXT_ULP_ACTION_BIT_SET_MAC_DST      = 0x0000000000002000,
+	BNXT_ULP_ACTION_BIT_OF_POP_VLAN      = 0x0000000000004000,
+	BNXT_ULP_ACTION_BIT_OF_PUSH_VLAN     = 0x0000000000008000,
+	BNXT_ULP_ACTION_BIT_OF_SET_VLAN_PCP  = 0x0000000000010000,
+	BNXT_ULP_ACTION_BIT_OF_SET_VLAN_VID  = 0x0000000000020000,
+	BNXT_ULP_ACTION_BIT_SET_IPV4_SRC     = 0x0000000000040000,
+	BNXT_ULP_ACTION_BIT_SET_IPV4_DST     = 0x0000000000080000,
+	BNXT_ULP_ACTION_BIT_SET_IPV6_SRC     = 0x0000000000100000,
+	BNXT_ULP_ACTION_BIT_SET_IPV6_DST     = 0x0000000000200000,
+	BNXT_ULP_ACTION_BIT_DEC_TTL          = 0x0000000000400000,
+	BNXT_ULP_ACTION_BIT_SET_TP_SRC       = 0x0000000000800000,
+	BNXT_ULP_ACTION_BIT_SET_TP_DST       = 0x0000000001000000,
+	BNXT_ULP_ACTION_BIT_VXLAN_ENCAP      = 0x0000000002000000,
+	BNXT_ULP_ACTION_BIT_NVGRE_ENCAP      = 0x0000000004000000,
+	BNXT_ULP_ACTION_BIT_LAST             = 0x0000000008000000
+};
+
 enum bnxt_ulp_byte_order {
 	BNXT_ULP_BYTE_ORDER_BE,
 	BNXT_ULP_BYTE_ORDER_LE,
@@ -35,8 +66,48 @@ enum bnxt_ulp_fmf_mask {
 	BNXT_ULP_FMF_MASK_LAST
 };
 
+enum bnxt_ulp_mark_enable {
+	BNXT_ULP_MARK_ENABLE_NO = 0,
+	BNXT_ULP_MARK_ENABLE_YES = 1,
+	BNXT_ULP_MARK_ENABLE_LAST = 2
+};
+
+enum bnxt_ulp_mask_opc {
+	BNXT_ULP_MASK_OPC_SET_TO_CONSTANT = 0,
+	BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD = 1,
+	BNXT_ULP_MASK_OPC_SET_TO_REGFILE = 2,
+	BNXT_ULP_MASK_OPC_ADD_PAD = 3,
+	BNXT_ULP_MASK_OPC_LAST = 4
+};
+
+enum bnxt_ulp_priority {
+	BNXT_ULP_PRIORITY_LEVEL_0 = 0,
+	BNXT_ULP_PRIORITY_LEVEL_1 = 1,
+	BNXT_ULP_PRIORITY_LEVEL_2 = 2,
+	BNXT_ULP_PRIORITY_LEVEL_3 = 3,
+	BNXT_ULP_PRIORITY_LEVEL_4 = 4,
+	BNXT_ULP_PRIORITY_LEVEL_5 = 5,
+	BNXT_ULP_PRIORITY_LEVEL_6 = 6,
+	BNXT_ULP_PRIORITY_LEVEL_7 = 7,
+	BNXT_ULP_PRIORITY_NOT_USED = 8,
+	BNXT_ULP_PRIORITY_LAST = 9
+};
+
 enum bnxt_ulp_regfile_index {
-	BNXT_ULP_REGFILE_INDEX_LAST
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 0,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 1,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 2,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 3,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 4,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 5,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 6,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 7,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN = 8,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 9,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 10,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 11,
+	BNXT_ULP_REGFILE_INDEX_NOT_USED = 12,
+	BNXT_ULP_REGFILE_INDEX_LAST = 13
 };
 
 enum bnxt_ulp_resource_func {
@@ -56,9 +127,78 @@ enum bnxt_ulp_result_opc {
 	BNXT_ULP_RESULT_OPC_LAST = 4
 };
 
+enum bnxt_ulp_spec_opc {
+	BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT = 0,
+	BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD = 1,
+	BNXT_ULP_SPEC_OPC_SET_TO_REGFILE = 2,
+	BNXT_ULP_SPEC_OPC_ADD_PAD = 3,
+	BNXT_ULP_SPEC_OPC_LAST = 4
+};
+
 enum bnxt_ulp_sym {
+	BNXT_ULP_SYM_BIG_ENDIAN = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
+	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
+	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
+	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
+	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
+	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
+	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
+	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
+	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
 	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
+	BNXT_ULP_SYM_NO = 0,
+	BNXT_ULP_SYM_PKT_TYPE_L2 = 0,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_TL4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_TL4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_GENEVE = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_GRE = 3,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_PPPOE = 6,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR1 = 8,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR2 = 9,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
 	BNXT_ULP_SYM_YES = 1
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_field_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
new file mode 100644
index 0000000..1f58ace
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* date: Mon Mar  9 02:37:53 2020
+ * version: 0.0
+ */
+
+#ifndef _ULP_HDR_FIELD_ENUMS_H_
+#define _ULP_HDR_FIELD_ENUMS_H_
+
+/* class_template_id = 0: ingress flow */
+enum bnxt_ulp_hf0 {
+	BNXT_ULP_HF0_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HF0_O_VTAG_NUM = 1,
+	BNXT_ULP_HF0_I_VTAG_NUM = 2,
+	BNXT_ULP_HF0_SVIF_INDEX = 3,
+	BNXT_ULP_HF0_O_ETH_DMAC = 4,
+	BNXT_ULP_HF0_O_ETH_SMAC = 5,
+	BNXT_ULP_HF0_O_ETH_TYPE = 6,
+	BNXT_ULP_HF0_OO_VLAN_CFI_PRI = 7,
+	BNXT_ULP_HF0_OO_VLAN_VID = 8,
+	BNXT_ULP_HF0_OO_VLAN_TYPE = 9,
+	BNXT_ULP_HF0_OI_VLAN_CFI_PRI = 10,
+	BNXT_ULP_HF0_OI_VLAN_VID = 11,
+	BNXT_ULP_HF0_OI_VLAN_TYPE = 12,
+	BNXT_ULP_HF0_O_IPV4_VER = 13,
+	BNXT_ULP_HF0_O_IPV4_TOS = 14,
+	BNXT_ULP_HF0_O_IPV4_LEN = 15,
+	BNXT_ULP_HF0_O_IPV4_FRAG_ID = 16,
+	BNXT_ULP_HF0_O_IPV4_FRAG_OFF = 17,
+	BNXT_ULP_HF0_O_IPV4_TTL = 18,
+	BNXT_ULP_HF0_O_IPV4_NEXT_PID = 19,
+	BNXT_ULP_HF0_O_IPV4_CSUM = 20,
+	BNXT_ULP_HF0_O_IPV4_SRC_ADDR = 21,
+	BNXT_ULP_HF0_O_IPV4_DST_ADDR = 22,
+	BNXT_ULP_HF0_O_UDP_SRC_PORT = 23,
+	BNXT_ULP_HF0_O_UDP_DST_PORT = 24,
+	BNXT_ULP_HF0_O_UDP_LENGTH = 25,
+	BNXT_ULP_HF0_O_UDP_CSUM = 26,
+	BNXT_ULP_HF0_T_VXLAN_FLAGS = 27,
+	BNXT_ULP_HF0_T_VXLAN_RSVD0 = 28,
+	BNXT_ULP_HF0_T_VXLAN_VNI = 29,
+	BNXT_ULP_HF0_T_VXLAN_RSVD1 = 30,
+	BNXT_ULP_HF0_I_ETH_DMAC = 31,
+	BNXT_ULP_HF0_I_ETH_SMAC = 32,
+	BNXT_ULP_HF0_I_ETH_TYPE = 33,
+	BNXT_ULP_HF0_IO_VLAN_CFI_PRI = 34,
+	BNXT_ULP_HF0_IO_VLAN_VID = 35,
+	BNXT_ULP_HF0_IO_VLAN_TYPE = 36,
+	BNXT_ULP_HF0_II_VLAN_CFI_PRI = 37,
+	BNXT_ULP_HF0_II_VLAN_VID = 38,
+	BNXT_ULP_HF0_II_VLAN_TYPE = 39,
+	BNXT_ULP_HF0_I_IPV4_VER = 40,
+	BNXT_ULP_HF0_I_IPV4_TOS = 41,
+	BNXT_ULP_HF0_I_IPV4_LEN = 42,
+	BNXT_ULP_HF0_I_IPV4_FRAG_ID = 43,
+	BNXT_ULP_HF0_I_IPV4_FRAG_OFF = 44,
+	BNXT_ULP_HF0_I_IPV4_TTL = 45,
+	BNXT_ULP_HF0_I_IPV4_NEXT_PID = 46,
+	BNXT_ULP_HF0_I_IPV4_CSUM = 47,
+	BNXT_ULP_HF0_I_IPV4_SRC_ADDR = 48,
+	BNXT_ULP_HF0_I_IPV4_DST_ADDR = 49,
+	BNXT_ULP_HF0_I_UDP_SRC_PORT = 50,
+	BNXT_ULP_HF0_I_UDP_DST_PORT = 51,
+	BNXT_ULP_HF0_I_UDP_LENGTH = 52,
+	BNXT_ULP_HF0_I_UDP_CSUM = 53
+};
+
+/* class_template_id = 1: egress flow */
+enum bnxt_ulp_hf1 {
+	BNXT_ULP_HF1_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HF1_O_VTAG_NUM = 1,
+	BNXT_ULP_HF1_I_VTAG_NUM = 2,
+	BNXT_ULP_HF1_SVIF_INDEX = 3,
+	BNXT_ULP_HF1_O_ETH_DMAC = 4,
+	BNXT_ULP_HF1_O_ETH_SMAC = 5,
+	BNXT_ULP_HF1_O_ETH_TYPE = 6,
+	BNXT_ULP_HF1_OO_VLAN_CFI_PRI = 7,
+	BNXT_ULP_HF1_OO_VLAN_VID = 8,
+	BNXT_ULP_HF1_OO_VLAN_TYPE = 9,
+	BNXT_ULP_HF1_OI_VLAN_CFI_PRI = 10,
+	BNXT_ULP_HF1_OI_VLAN_VID = 11,
+	BNXT_ULP_HF1_OI_VLAN_TYPE = 12,
+	BNXT_ULP_HF1_O_IPV4_VER = 13,
+	BNXT_ULP_HF1_O_IPV4_TOS = 14,
+	BNXT_ULP_HF1_O_IPV4_LEN = 15,
+	BNXT_ULP_HF1_O_IPV4_FRAG_ID = 16,
+	BNXT_ULP_HF1_O_IPV4_FRAG_OFF = 17,
+	BNXT_ULP_HF1_O_IPV4_TTL = 18,
+	BNXT_ULP_HF1_O_IPV4_NEXT_PID = 19,
+	BNXT_ULP_HF1_O_IPV4_CSUM = 20,
+	BNXT_ULP_HF1_O_IPV4_SRC_ADDR = 21,
+	BNXT_ULP_HF1_O_IPV4_DST_ADDR = 22,
+	BNXT_ULP_HF1_O_UDP_SRC_PORT = 23,
+	BNXT_ULP_HF1_O_UDP_DST_PORT = 24,
+	BNXT_ULP_HF1_O_UDP_LENGTH = 25,
+	BNXT_ULP_HF1_O_UDP_CSUM = 26
+};
+
+/* class_template_id = 2: ingress flow */
+enum bnxt_ulp_hf2 {
+	BNXT_ULP_HF2_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HF2_O_VTAG_NUM = 1,
+	BNXT_ULP_HF2_I_VTAG_NUM = 2,
+	BNXT_ULP_HF2_SVIF_INDEX = 3,
+	BNXT_ULP_HF2_O_ETH_DMAC = 4,
+	BNXT_ULP_HF2_O_ETH_SMAC = 5,
+	BNXT_ULP_HF2_O_ETH_TYPE = 6,
+	BNXT_ULP_HF2_OO_VLAN_CFI_PRI = 7,
+	BNXT_ULP_HF2_OO_VLAN_VID = 8,
+	BNXT_ULP_HF2_OO_VLAN_TYPE = 9,
+	BNXT_ULP_HF2_OI_VLAN_CFI_PRI = 10,
+	BNXT_ULP_HF2_OI_VLAN_VID = 11,
+	BNXT_ULP_HF2_OI_VLAN_TYPE = 12,
+	BNXT_ULP_HF2_O_IPV4_VER = 13,
+	BNXT_ULP_HF2_O_IPV4_TOS = 14,
+	BNXT_ULP_HF2_O_IPV4_LEN = 15,
+	BNXT_ULP_HF2_O_IPV4_FRAG_ID = 16,
+	BNXT_ULP_HF2_O_IPV4_FRAG_OFF = 17,
+	BNXT_ULP_HF2_O_IPV4_TTL = 18,
+	BNXT_ULP_HF2_O_IPV4_NEXT_PID = 19,
+	BNXT_ULP_HF2_O_IPV4_CSUM = 20,
+	BNXT_ULP_HF2_O_IPV4_SRC_ADDR = 21,
+	BNXT_ULP_HF2_O_IPV4_DST_ADDR = 22,
+	BNXT_ULP_HF2_O_UDP_SRC_PORT = 23,
+	BNXT_ULP_HF2_O_UDP_DST_PORT = 24,
+	BNXT_ULP_HF2_O_UDP_LENGTH = 25,
+	BNXT_ULP_HF2_O_UDP_CSUM = 26
+};
+
+#endif /* _ULP_HDR_FIELD_ENUMS_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 2b0a3d7..e28d049 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,9 +17,21 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
+/* Structure to store the protocol fields */
+#define RTE_PARSER_FLOW_HDR_FIELD_SIZE		16
+struct ulp_rte_hdr_field {
+	uint8_t		spec[RTE_PARSER_FLOW_HDR_FIELD_SIZE];
+	uint8_t		mask[RTE_PARSER_FLOW_HDR_FIELD_SIZE];
+	uint32_t	size;
+};
+
+struct ulp_rte_act_bitmap {
+	uint64_t	bits;
+};
+
 /*
- * structure to hold the action property details
- * It is a array of 128 bytes
+ * Structure to hold the action property details.
+ * It is an array of 128 bytes.
  */
 struct ulp_rte_act_prop {
 	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
@@ -39,6 +51,35 @@ struct bnxt_ulp_device_params {
 	uint32_t			num_resources_per_flow;
 };
 
+struct bnxt_ulp_mapper_class_tbl_info {
+	enum bnxt_ulp_resource_func	resource_func;
+	uint32_t	table_type;
+	uint8_t		direction;
+	uint8_t		mem;
+	uint32_t	priority;
+	uint8_t		srch_b4_alloc;
+	uint32_t	critical_resource;
+
+	/* Information for accessing the ulp_key_field_list */
+	uint32_t	key_start_idx;
+	uint16_t	key_bit_size;
+	uint16_t	key_num_fields;
+	/* Size of the blob that holds the key */
+	uint16_t	blob_key_bit_size;
+
+	/* Information for accessing the ulp_class_result_field_list */
+	uint32_t	result_start_idx;
+	uint16_t	result_bit_size;
+	uint16_t	result_num_fields;
+
+	/* Information for accessing the ulp_ident_list */
+	uint32_t	ident_start_idx;
+	uint16_t	ident_nums;
+
+	uint8_t		mark_enable;
+	enum bnxt_ulp_regfile_index	regfile_wr_idx;
+};
+
 struct bnxt_ulp_mapper_act_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	enum tf_tbl_type table_type;
@@ -52,6 +93,15 @@ struct bnxt_ulp_mapper_act_tbl_info {
 	enum bnxt_ulp_regfile_index	regfile_wr_idx;
 };
 
+struct bnxt_ulp_mapper_class_key_field_info {
+	uint8_t			name[64];
+	enum bnxt_ulp_mask_opc	mask_opcode;
+	enum bnxt_ulp_spec_opc	spec_opcode;
+	uint16_t		field_bit_size;
+	uint8_t			mask_operand[16];
+	uint8_t			spec_operand[16];
+};
+
 struct bnxt_ulp_mapper_result_field_info {
 	uint8_t				name[64];
 	enum bnxt_ulp_result_opc	result_opcode;
@@ -59,14 +109,36 @@ struct bnxt_ulp_mapper_result_field_info {
 	uint8_t				result_operand[16];
 };
 
+struct bnxt_ulp_mapper_ident_info {
+	uint8_t		name[64];
+	uint32_t	resource_func;
+
+	uint16_t	ident_type;
+	uint16_t	ident_bit_size;
+	uint16_t	ident_bit_pos;
+	enum bnxt_ulp_regfile_index	regfile_wr_idx;
+};
+
+/*
+ * Flow Mapper Static Data Externs:
+ * Access to the below static data should be done through access functions
+ * and not directly throughout the code.
+ */
+
 /*
- * The ulp_device_params is indexed by the dev_id
- * This table maintains the device specific parameters
+ * The ulp_device_params is indexed by the dev_id.
+ * This table maintains the device specific parameters.
  */
 extern struct bnxt_ulp_device_params ulp_device_params[];
 
 /*
  * The ulp_data_field_list provides the instructions for creating an action
+ * records such as tcam/em results.
+ */
+extern struct bnxt_ulp_mapper_result_field_info	ulp_class_result_field_list[];
+
+/*
+ * The ulp_data_field_list provides the instructions for creating an action
  * record.  It uses the same structure as the result list, but is only used for
  * actions.
  */
@@ -75,6 +147,19 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[];
 
 /*
  * The ulp_act_prop_map_table provides the mapping to index and size of action
+ * tcam and em tables.
+ */
+extern
+struct bnxt_ulp_mapper_class_key_field_info	ulp_class_key_field_list[];
+
+/*
+ * The ulp_ident_list provides the instructions for creating identifiers such
+ * as profile ids.
+ */
+extern struct bnxt_ulp_mapper_ident_info	ulp_ident_list[];
+
+/*
+ * The ulp_act_prop_map_table provides the mapping to index and size of action
  * properties.
  */
 extern uint32_t ulp_act_prop_map_table[];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 20/33] net/bnxt: add support to free key and action tables
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (18 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 19/33] net/bnxt: add support to process key tables Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 21/33] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
                   ` (13 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch does the following (a caller-side sketch follows the list):
1. Gets all the flow resources from the flow id
2. Frees all the table resources
3. Frees the flow in the flow table
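As an aside for readers (not part of the patch): a minimal caller-side
sketch of how the teardown added here could be driven from an rte_flow
destroy hook. The wrapper name bnxt_ulp_flow_destroy_example is
hypothetical; only ulp_mapper_flow_destroy() and
BNXT_ULP_REGULAR_FLOW_TABLE come from this series.

#include "bnxt_tf_common.h"
#include "bnxt_ulp.h"
#include "ulp_mapper.h"

/* Hypothetical destroy hook body; fid is the flow id handed back by the
 * create path.  ulp_mapper_flow_destroy() walks the flow's resource
 * list, frees each TCAM/EM/index/identifier resource, then returns the
 * fid to the regular flow table.
 */
static int32_t
bnxt_ulp_flow_destroy_example(struct bnxt_ulp_context *ulp_ctx,
			      uint32_t fid)
{
	int32_t rc;

	rc = ulp_mapper_flow_destroy(ulp_ctx, fid);
	if (rc)
		BNXT_TF_DBG(ERR, "flow %u destroy failed rc=%d\n", fid, rc);
	return rc;
}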

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c  | 199 ++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h  |  30 +++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c   | 193 ++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h   |  15 +++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  23 +++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |  18 +++
 6 files changed, 476 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 0e7b433..8449db3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -23,6 +23,32 @@
 #define ULP_FLOW_DB_RES_NXT_RESET(dst)	((dst) &= ~(ULP_FLOW_DB_RES_NXT_MASK))
 
 /*
+ * Helper function to set the bit in the active flow table
+ * No validation is done in this function.
+ *
+ * flow_tbl [in] Ptr to flow table
+ * idx [in] The index of the bit to be set or reset.
+ * flag [in] 1 to set and 0 to reset.
+ *
+ * returns none
+ */
+static void
+ulp_flow_db_active_flow_set(struct bnxt_ulp_flow_tbl	*flow_tbl,
+			    uint32_t			idx,
+			    uint32_t			flag)
+{
+	uint32_t		active_index;
+
+	active_index = idx / ULP_INDEX_BITMAP_SIZE;
+	if (flag)
+		ULP_INDEX_BITMAP_SET(flow_tbl->active_flow_tbl[active_index],
+				     idx);
+	else
+		ULP_INDEX_BITMAP_RESET(flow_tbl->active_flow_tbl[active_index],
+				       idx);
+}
+
+/*
  * Helper function to copy the resource params to resource info.
  * No validation being done in this function.
  *
@@ -71,6 +97,35 @@ ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info   *resource_info,
 }
 
 /*
+ * Helper function to copy the resource info to the resource params.
+ *  No validation being done in this function.
+ *
+ * resource_info [in] Ptr to resource information
+ * params [out] The output params to the caller
+ *
+ * returns none
+ */
+static void
+ulp_flow_db_res_info_to_params(struct ulp_fdb_resource_info   *resource_info,
+			       struct ulp_flow_db_res_params  *params)
+{
+	memset(params, 0, sizeof(struct ulp_flow_db_res_params));
+	params->direction = ((resource_info->nxt_resource_idx &
+				 ULP_FLOW_DB_RES_DIR_MASK) >>
+				 ULP_FLOW_DB_RES_DIR_BIT);
+	params->resource_func = ((resource_info->nxt_resource_idx &
+				 ULP_FLOW_DB_RES_FUNC_MASK) >>
+				 ULP_FLOW_DB_RES_FUNC_BITS);
+
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+		params->resource_hndl = resource_info->resource_hndl;
+		params->resource_type = resource_info->resource_type;
+	} else {
+		params->resource_hndl = resource_info->resource_em_handle;
+	}
+}
+
+/*
  * Helper function to allocate the flow table and initialize
  * the stack for allocation operations.
  *
@@ -122,7 +177,7 @@ ulp_flow_db_alloc_resource(struct bnxt_ulp_flow_db *flow_db,
 }
 
 /*
- * Helper function to de allocate the flow table.
+ * Helper function to deallocate the flow table.
  *
  * flow_db [in] Ptr to flow database structure
  * tbl_idx [in] The index to table creation.
@@ -321,3 +376,145 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 	/* all good, return success */
 	return 0;
 }
+
+/*
+ * Free the flow database entry.
+ * The params->critical_resource has to be set to 1 to free the first resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in/out] The contents to be copied into params.
+ * Only the critical_resource needs to be set by the caller.
+ *
+ * Returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+	struct ulp_fdb_resource_info	*nxt_resource, *fid_resource;
+	uint32_t			nxt_idx = 0;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (tbl_idx < 0 || tbl_idx >= BNXT_ULP_FLOW_TABLE_MAX) {
+		BNXT_TF_DBG(ERR, "Invalid table index\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* check for max flows */
+	if (fid >= flow_tbl->num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return -EINVAL;
+	}
+
+	/* check if the flow is active or not */
+	if (!ulp_flow_db_active_flow_is_set(flow_tbl, fid)) {
+		BNXT_TF_DBG(ERR, "flow does not exist\n");
+		return -EINVAL;
+	}
+
+	fid_resource = &flow_tbl->flow_resources[fid];
+	if (!params->critical_resource) {
+		/* Not the critical resource so free the resource */
+		ULP_FLOW_DB_RES_NXT_SET(nxt_idx,
+					fid_resource->nxt_resource_idx);
+		if (!nxt_idx) {
+			/* reached end of resources */
+			return -ENOENT;
+		}
+		nxt_resource = &flow_tbl->flow_resources[nxt_idx];
+
+		/* connect the fid resource to the next resource */
+		ULP_FLOW_DB_RES_NXT_RESET(fid_resource->nxt_resource_idx);
+		ULP_FLOW_DB_RES_NXT_SET(fid_resource->nxt_resource_idx,
+					nxt_resource->nxt_resource_idx);
+
+		/* update the contents to be given to caller */
+		ulp_flow_db_res_info_to_params(nxt_resource, params);
+
+		/* Delete the nxt_resource */
+		memset(nxt_resource, 0, sizeof(struct ulp_fdb_resource_info));
+
+		/* add it to the free list */
+		flow_tbl->tail_index++;
+		if (flow_tbl->tail_index >= flow_tbl->num_resources) {
+			BNXT_TF_DBG(ERR, "FlowDB:Tail reached max\n");
+			return -ENOENT;
+		}
+		flow_tbl->flow_tbl_stack[flow_tbl->tail_index] = nxt_idx;
+
+	} else {
+		/* Critical resource. copy the contents and exit */
+		ulp_flow_db_res_info_to_params(fid_resource, params);
+		ULP_FLOW_DB_RES_NXT_SET(nxt_idx,
+					fid_resource->nxt_resource_idx);
+		memset(fid_resource, 0, sizeof(struct ulp_fdb_resource_info));
+		ULP_FLOW_DB_RES_NXT_SET(fid_resource->nxt_resource_idx,
+					nxt_idx);
+	}
+
+	/* all good, return success */
+	return 0;
+}
+
+/*
+ * Free the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
+			     enum bnxt_ulp_flow_db_tables	tbl_idx,
+			     uint32_t				fid)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (tbl_idx < 0 || tbl_idx >= BNXT_ULP_FLOW_TABLE_MAX) {
+		BNXT_TF_DBG(ERR, "Invalid table index\n");
+		return -EINVAL;
+	}
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* check for limits of fid */
+	if (fid >= flow_tbl->num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return -EINVAL;
+	}
+
+	/* check if the flow is active or not */
+	if (!ulp_flow_db_active_flow_is_set(flow_tbl, fid)) {
+		BNXT_TF_DBG(ERR, "flow does not exist\n");
+		return -EINVAL;
+	}
+	flow_tbl->head_index--;
+	if (!flow_tbl->head_index) {
+		BNXT_TF_DBG(ERR, "FlowDB: Head Ptr is zero\n");
+		return -ENOENT;
+	}
+	flow_tbl->flow_tbl_stack[flow_tbl->head_index] = fid;
+	ulp_flow_db_active_flow_set(flow_tbl, fid, 0);
+
+	/* all good, return success */
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index f6055a5..20109b9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -99,4 +99,34 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 				 uint32_t			fid,
 				 struct ulp_flow_db_res_params	*params);
 
+/*
+ * Free the flow database entry.
+ * The params->critical_resource has to be set to 1 to free the first resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in/out] The contents to be copied into params.
+ * Only the critical_resource needs to be set by the caller.
+ *
+ * Returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params);
+
+/*
+ * Free the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
+			     enum bnxt_ulp_flow_db_tables	tbl_idx,
+			     uint32_t				fid);
+
 #endif /* _ULP_FLOW_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index a041394..cc85557 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -143,6 +143,87 @@ ulp_mapper_ident_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
 	return &ulp_ident_list[idx];
 }
 
+static inline int32_t
+ulp_mapper_tcam_entry_free(struct bnxt_ulp_context *ulp  __rte_unused,
+			   struct tf *tfp,
+			   struct ulp_flow_db_res_params *res)
+{
+	struct tf_free_tcam_entry_parms fparms = {
+		.dir		= res->direction,
+		.tcam_tbl_type	= res->resource_type,
+		.idx		= (uint16_t)res->resource_hndl
+	};
+
+	return tf_free_tcam_entry(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_index_entry_free(struct bnxt_ulp_context *ulp  __rte_unused,
+			    struct tf *tfp,
+			    struct ulp_flow_db_res_params *res)
+{
+	struct tf_free_tbl_entry_parms fparms = {
+		.dir	= res->direction,
+		.type	= res->resource_type,
+		.idx	= (uint32_t)res->resource_hndl
+	};
+
+	return tf_free_tbl_entry(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_eem_entry_free(struct bnxt_ulp_context *ulp,
+			  struct tf *tfp,
+			  struct ulp_flow_db_res_params *res)
+{
+	struct tf_delete_em_entry_parms fparms = { 0 };
+	int32_t rc;
+
+	fparms.dir		= res->direction;
+	fparms.mem		= TF_MEM_EXTERNAL;
+	fparms.flow_handle	= res->resource_hndl;
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp, &fparms.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_em_entry(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_ident_free(struct bnxt_ulp_context *ulp __rte_unused,
+		      struct tf *tfp,
+		      struct ulp_flow_db_res_params *res)
+{
+	struct tf_free_identifier_parms fparms = {
+		.dir		= res->direction,
+		.ident_type	= res->resource_type,
+		.id		= (uint16_t)res->resource_hndl
+	};
+
+	return tf_free_identifier(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_mark_free(struct bnxt_ulp_context *ulp,
+		     struct ulp_flow_db_res_params *res)
+{
+	uint32_t flag;
+	uint32_t fid;
+	uint32_t gfid;
+
+	fid	  = (uint32_t)res->resource_hndl;
+	TF_GET_FLAG_FROM_FLOW_ID(fid, flag);
+	TF_GET_GFID_FROM_FLOW_ID(fid, gfid);
+
+	return ulp_mark_db_mark_del(ulp,
+				    (flag == TF_GFID_TABLE_EXTERNAL),
+				    gfid,
+				    0);
+}
+
 static int32_t
 ulp_mapper_ident_process(struct bnxt_ulp_mapper_parms *parms,
 			 struct bnxt_ulp_mapper_class_tbl_info *tbl,
@@ -1131,3 +1212,115 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	return rc;
 }
+
+static int32_t
+ulp_mapper_resource_free(struct bnxt_ulp_context *ulp,
+			 struct ulp_flow_db_res_params *res)
+{
+	struct tf *tfp;
+	int32_t	rc = 0;
+
+	if (!res || !ulp) {
+		BNXT_TF_DBG(ERR, "Unable to free resource\n ");
+		return -EINVAL;
+	}
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Unable to free resource failed to get tfp\n");
+		return -EINVAL;
+	}
+
+	switch (res->resource_func) {
+	case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
+		rc = ulp_mapper_tcam_entry_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+		rc = ulp_mapper_eem_entry_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+		rc = ulp_mapper_index_entry_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_IDENTIFIER:
+		rc = ulp_mapper_ident_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_HW_FID:
+		rc = ulp_mapper_mark_free(ulp, res);
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+int32_t
+ulp_mapper_resources_free(struct bnxt_ulp_context	*ulp_ctx,
+			  uint32_t fid,
+			  enum bnxt_ulp_flow_db_tables	tbl_type)
+{
+	struct ulp_flow_db_res_params	res_parms = { 0 };
+	int32_t				rc, trc;
+
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Set the critical resource on the first resource del, then iterate
+	 * while status is good
+	 */
+	res_parms.critical_resource = 1;
+	rc = ulp_flow_db_resource_del(ulp_ctx, tbl_type, fid, &res_parms);
+
+	if (rc) {
+		/*
+		 * This is unexpected on the first call to resource del.
+		 * It likely means that the flow did not exist in the flow db.
+		 */
+		BNXT_TF_DBG(ERR, "Flow[%d][0x%08x] failed to free (rc=%d)\n",
+			    tbl_type, fid, rc);
+		return rc;
+	}
+
+	while (!rc) {
+		trc = ulp_mapper_resource_free(ulp_ctx, &res_parms);
+		if (trc)
+			/*
+			 * On fail, we still need to attempt to free the
+			 * remaining resources.  Don't return
+			 */
+			BNXT_TF_DBG(ERR,
+				    "Flow[%d][0x%x] Res[%d][0x%016" PRIx64
+				    "] failed rc=%d.\n",
+				    tbl_type, fid, res_parms.resource_func,
+				    res_parms.resource_hndl, trc);
+
+		/* All subsequent call require the critical_resource be zero */
+		res_parms.critical_resource = 0;
+
+		rc = ulp_flow_db_resource_del(ulp_ctx,
+					      tbl_type,
+					      fid,
+					      &res_parms);
+	}
+
+	/* Free the Flow ID since we've removed all resources */
+	rc = ulp_flow_db_fid_free(ulp_ctx, tbl_type, fid);
+
+	return rc;
+}
+
+int32_t
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
+{
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
+		return -EINVAL;
+	}
+
+	return ulp_mapper_resources_free(ulp_ctx,
+					 fid,
+					 BNXT_ULP_REGULAR_FLOW_TABLE);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index adbcec2..8655728 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -15,6 +15,8 @@
 #include "bnxt_ulp.h"
 #include "ulp_utils.h"
 
+#define ULP_SZ_BITS2BYTES(x) (((x) + 7) / 8)
+
 /* Internal Structure for passing the arguments around */
 struct bnxt_ulp_mapper_parms {
 	uint32_t				dev_id;
@@ -36,4 +38,17 @@ struct bnxt_ulp_mapper_parms {
 	enum bnxt_ulp_flow_db_tables		tbl_idx;
 };
 
+/* Function that frees all resources associated with the flow. */
+int32_t
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
+
+/*
+ * Function that frees all resources and can be called on default or regular
+ * flows
+ */
+int32_t
+ulp_mapper_resources_free(struct bnxt_ulp_context	*ulp_ctx,
+			  uint32_t fid,
+			  enum bnxt_ulp_flow_db_tables	tbl_type);
+
 #endif /* _ULP_MAPPER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 837064e..566668e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -135,7 +135,7 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 		    mark_tbl->gfid_max,
 		    mark_tbl->gfid_mask);
 
-	/* Add the mart tbl to the ulp context. */
+	/* Add the mark tbl to the ulp context. */
 	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
 
 	return 0;
@@ -195,3 +195,24 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 {
 	return ulp_mark_db_mark_set(ctxt, is_gfid, gfid, mark);
 }
+
+/*
+ * Removes a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * gfid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] Unused for delete; kept to mirror ulp_mark_db_mark_add()
+ *
+ * returns 0 on success and negative on failure.
+ *
+ */
+int32_t
+ulp_mark_db_mark_del(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t gfid,
+		     uint32_t mark  __rte_unused)
+{
+	return ulp_mark_db_mark_set(ctxt, is_gfid, gfid, ULP_MARK_INVALID);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index 18abea4..f0d1515 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -72,4 +72,22 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 		     uint32_t gfid,
 		     uint32_t mark);
 
+/*
+ * Removes a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * gfid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] Unused for delete; kept to mirror ulp_mark_db_mark_add()
+ *
+ * returns 0 on success and negative on failure.
+ *
+ */
+int32_t
+ulp_mark_db_mark_del(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t gfid,
+		     uint32_t mark);
+
 #endif /* _ULP_MARK_MGR_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 21/33] net/bnxt: add support to alloc and program key and act tbls
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (19 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 20/33] net/bnxt: add support to free key and action tables Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 22/33] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
                   ` (12 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch does the following (a usage sketch follows the list):
1. Gets the action tables information from the action template id
2. Gets the class tables information from the class template id
3. Initializes the registry file
4. Allocates a flow id from the flow table
5. Processes the class & action tables
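For illustration (not part of the patch): a sketch of how the new create
entry point is meant to be invoked once the parser and matcher stages of
this series have produced the bitmaps, fields and template ids. The
wrapper name is hypothetical; ulp_mapper_flow_create() and its contract
come from this patch.

#include "bnxt_ulp.h"
#include "ulp_template_struct.h"
#include "ulp_mapper.h"

/* Hypothetical glue: on success *fid anchors every resource allocated
 * for the flow; on failure the mapper has already unwound them via
 * ulp_mapper_flow_destroy().
 */
static int32_t
bnxt_ulp_flow_create_example(struct bnxt_ulp_context *ulp_ctx,
			     struct ulp_rte_hdr_bitmap *hdr_bitmap,
			     struct ulp_rte_hdr_field *hdr_field,
			     struct ulp_rte_act_bitmap *act_bitmap,
			     struct ulp_rte_act_prop *act_prop,
			     uint32_t class_tid, uint32_t act_tid,
			     uint32_t *fid)
{
	/* app_priority is currently unused by the mapper */
	return ulp_mapper_flow_create(ulp_ctx, 0, hdr_bitmap, hdr_field,
				      act_bitmap, act_prop,
				      class_tid, act_tid, fid);
}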

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |  37 +++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |  13 ++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 196 ++++++++++++++++++++++++--
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |  15 ++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     |  22 +++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |   7 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  31 +++-
 7 files changed, 310 insertions(+), 11 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 8449db3..76ec856 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -303,6 +303,43 @@ int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 }
 
 /*
+ * Allocate the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t ulp_flow_db_fid_alloc(struct bnxt_ulp_context		*ulp_ctxt,
+			      enum bnxt_ulp_flow_db_tables	tbl_idx,
+			      uint32_t				*fid)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	*fid = 0; /* Initialize fid to invalid value */
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+	/* check for max flows */
+	if (flow_tbl->num_flows <= flow_tbl->head_index) {
+		BNXT_TF_DBG(ERR, "Flow database has reached max flows\n");
+		return -ENOMEM;
+	}
+	*fid = flow_tbl->flow_tbl_stack[flow_tbl->head_index];
+	flow_tbl->head_index++;
+	ulp_flow_db_active_flow_set(flow_tbl, *fid, 1);
+
+	/* all good, return success */
+	return 0;
+}
+
+/*
  * Allocate the flow database entry.
  * The params->critical_resource has to be set to 0 to allocate a new resource.
  *
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index 20109b9..eb5effa 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -84,6 +84,19 @@ int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt);
 int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
 
 /*
+ * Allocate the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t ulp_flow_db_fid_alloc(struct bnxt_ulp_context		*ulp_ctxt,
+			      enum bnxt_ulp_flow_db_tables	tbl_idx,
+			      uint32_t				*fid);
+
+/*
  * Allocate the flow database entry.
  * The params->critical_resource has to be set to 0 to allocate a new resource.
  *
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index cc85557..852d9e3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -16,12 +16,6 @@
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
 
-int32_t
-ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms);
-
-int32_t
-ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms);
-
 /*
  * Get the size of the action property for a given index.
  *
@@ -38,7 +32,76 @@ ulp_mapper_act_prop_size_get(uint32_t idx)
 }
 
 /*
- * Get the list of result fields that implement the flow action
+ * Get the list of action tables that implement the flow.
+ * Gets a device dependent list of tables that implement the action template id.
+ *
+ * dev_id [in] The device id of the forwarding element
+ *
+ * tid [in] The action template id that matches the flow
+ *
+ * num_tbls [out] The number of action tables in the returned array
+ *
+ * Returns an array of action tables to implement the flow, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_act_tbl_info *
+ulp_mapper_action_tbl_list_get(uint32_t dev_id,
+			       uint32_t tid,
+			       uint32_t *num_tbls)
+{
+	uint32_t	idx;
+	uint32_t	tidx;
+
+	if (!num_tbls) {
+		BNXT_TF_DBG(ERR, "Invalid arguments\n");
+		return NULL;
+	}
+
+	/* template shift and device mask */
+	tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
+
+	/* NOTE: Need to have something from template compiler to help validate
+	 * range of dev_id and act_tid
+	 */
+	idx		= ulp_act_tmpl_list[tidx].start_tbl_idx;
+	*num_tbls	= ulp_act_tmpl_list[tidx].num_tbls;
+
+	return &ulp_act_tbl_list[idx];
+}
+
+/*
+ * Get the list of classifier tables that implement the flow.
+ * Gets a device dependent list of tables that implement the class template id.
+ *
+ * dev_id [in] The device id of the forwarding element
+ *
+ * tid [in] The template id that matches the flow
+ *
+ * num_tbls [out] The number of classifier tables in the returned array
+ *
+ * Returns an array of classifier tables to implement the flow, or NULL on
+ * error.
+ */
+static struct bnxt_ulp_mapper_class_tbl_info *
+ulp_mapper_class_tbl_list_get(uint32_t dev_id,
+			      uint32_t tid,
+			      uint32_t *num_tbls)
+{
+	uint32_t idx;
+	uint32_t tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
+
+	if (!num_tbls)
+		return NULL;
+
+	/* NOTE: Need to have something from template compiler to help validate
+	 * range of dev_id and tid
+	 */
+	idx		= ulp_class_tmpl_list[tidx].start_tbl_idx;
+	*num_tbls	= ulp_class_tmpl_list[tidx].num_tbls;
+
+	return &ulp_class_tbl_list[idx];
+}
+
+/*
+ * Get the list of key fields that implement the flow.
  *
  * ctxt [in] The ulp context
  *
@@ -46,7 +109,7 @@ ulp_mapper_act_prop_size_get(uint32_t idx)
  *
  * num_flds [out] The number of key fields in the returned array
  *
- * returns array of Key fields, or NULL on error
+ * Returns array of Key fields, or NULL on error.
  */
 static struct bnxt_ulp_mapper_class_key_field_info *
 ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
@@ -1147,7 +1210,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
  */
-int32_t
+static int32_t
 ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 {
 	uint32_t	i;
@@ -1169,7 +1232,7 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 }
 
 /* Create the classifier table entries for a flow. */
-int32_t
+static int32_t
 ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 {
 	uint32_t	i;
@@ -1324,3 +1387,116 @@ ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
 					 fid,
 					 BNXT_ULP_REGULAR_FLOW_TABLE);
 }
+
+/* Function to handle the mapping of the Flow to be compatible
+ * with the underlying hardware.
+ */
+int32_t
+ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
+		       uint32_t app_priority __rte_unused,
+		       struct ulp_rte_hdr_bitmap *hdr_bitmap __rte_unused,
+		       struct ulp_rte_hdr_field *hdr_field,
+		       struct ulp_rte_act_bitmap *act_bitmap,
+		       struct ulp_rte_act_prop *act_prop,
+		       uint32_t class_tid,
+		       uint32_t act_tid,
+		       uint32_t *flow_id)
+{
+	struct ulp_regfile		regfile;
+	struct bnxt_ulp_mapper_parms	parms;
+	struct bnxt_ulp_device_params	*device_params;
+	int32_t				rc, trc;
+
+	/* Initialize the parms structure */
+	memset(&parms, 0, sizeof(parms));
+	parms.act_prop = act_prop;
+	parms.act_bitmap = act_bitmap;
+	parms.regfile = &regfile;
+	parms.hdr_field = hdr_field;
+	parms.tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	parms.ulp_ctx = ulp_ctx;
+
+	/* Get the device id from the ulp context */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &parms.dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	/* Get the action table entry from device id and act context id */
+	parms.act_tid = act_tid;
+	parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
+						     parms.act_tid,
+						     &parms.num_atbls);
+	if (!parms.atbls || !parms.num_atbls) {
+		BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
+			    parms.dev_id, parms.act_tid);
+		return -EINVAL;
+	}
+
+	/* Get the class table entry from device id and act context id */
+	parms.class_tid = class_tid;
+	parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
+						    parms.class_tid,
+						    &parms.num_ctbls);
+	if (!parms.ctbls || !parms.num_ctbls) {
+		BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+			    parms.dev_id, parms.class_tid);
+		return -EINVAL;
+	}
+
+	/* Get the byte order for the further processing from device params */
+	device_params = bnxt_ulp_device_params_get(parms.dev_id);
+	if (!device_params) {
+		BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+			    parms.dev_id, parms.class_tid);
+		return -EINVAL;
+	}
+	parms.order = device_params->byte_order;
+	parms.encap_byte_swap = device_params->encap_byte_swap;
+
+	/* initialize the registry file for further processing */
+	if (!ulp_regfile_init(parms.regfile)) {
+		BNXT_TF_DBG(ERR, "regfile initialization failed.\n");
+		return -EINVAL;
+	}
+
+	/* Allocate a Flow ID for attaching all resources for the flow to.
+	 * Once allocated, all errors have to walk the list of resources and
+	 * free each of them.
+	 */
+	rc = ulp_flow_db_fid_alloc(ulp_ctx,
+				   BNXT_ULP_REGULAR_FLOW_TABLE,
+				   &parms.fid);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to allocate flow table entry\n");
+		return rc;
+	}
+
+	/* Process the action template list from the selected action table */
+	rc = ulp_mapper_action_tbls_process(&parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "action tables failed creation for %d:%d\n",
+			    parms.dev_id, parms.act_tid);
+		goto flow_error;
+	}
+
+	/* All good. Now process the class template */
+	rc = ulp_mapper_class_tbls_process(&parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "class tables failed creation for %d:%d\n",
+			    parms.dev_id, parms.class_tid);
+		goto flow_error;
+	}
+
+	*flow_id = parms.fid;
+
+	return rc;
+
+flow_error:
+	/* Free all resources that were allocated during flow creation */
+	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free all resources rc=%d\n", trc);
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 8655728..5f3d46e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -38,6 +38,21 @@ struct bnxt_ulp_mapper_parms {
 	enum bnxt_ulp_flow_db_tables		tbl_idx;
 };
 
+/*
+ * Function to handle the mapping of the Flow to be compatible
+ * with the underlying hardware.
+ */
+int32_t
+ulp_mapper_flow_create(struct bnxt_ulp_context	*ulp_ctx,
+		       uint32_t		app_priority,
+		       struct ulp_rte_hdr_bitmap  *hdr_bitmap,
+		       struct ulp_rte_hdr_field *hdr_field,
+		       struct ulp_rte_act_bitmap *act,
+		       struct ulp_rte_act_prop *act_prop,
+		       uint32_t		class_tid,
+		       uint32_t		act_tid,
+		       uint32_t		*flow_id);
+
 /* Function that frees all resources associated with the flow. */
 int32_t
 ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index ba06493..5ec7adc 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -1004,6 +1004,28 @@ struct bnxt_ulp_mapper_ident_info ulp_ident_list[] = {
 	}
 };
 
+struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
+	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 1,
+	.start_tbl_idx = 0
+	}
+};
+
+struct bnxt_ulp_mapper_act_tbl_info ulp_act_tbl_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.table_type = TF_TBL_TYPE_EXT,
+	.direction = TF_DIR_RX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 0,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN
+	}
+};
+
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
 	.field_bit_size = 14,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 733836a..957b21a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -12,6 +12,7 @@
 #define ULP_TEMPLATE_DB_H_
 
 #define BNXT_ULP_MAX_NUM_DEVICES 4
+#define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -127,6 +128,12 @@ enum bnxt_ulp_result_opc {
 	BNXT_ULP_RESULT_OPC_LAST = 4
 };
 
+enum bnxt_ulp_search_before_alloc {
+	BNXT_ULP_SEARCH_BEFORE_ALLOC_NO = 0,
+	BNXT_ULP_SEARCH_BEFORE_ALLOC_YES = 1,
+	BNXT_ULP_SEARCH_BEFORE_ALLOC_LAST = 2
+};
+
 enum bnxt_ulp_spec_opc {
 	BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index e28d049..b7094c5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,6 +17,10 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
+struct ulp_rte_hdr_bitmap {
+	uint64_t	bits;
+};
+
 /* Structure to store the protocol fields */
 #define RTE_PARSER_FLOW_HDR_FIELD_SIZE		16
 struct ulp_rte_hdr_field {
@@ -51,6 +55,13 @@ struct bnxt_ulp_device_params {
 	uint32_t			num_resources_per_flow;
 };
 
+/* Flow Mapper */
+struct bnxt_ulp_mapper_tbl_list_info {
+	uint32_t	device_name;
+	uint32_t	start_tbl_idx;
+	uint32_t	num_tbls;
+};
+
 struct bnxt_ulp_mapper_class_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	uint32_t	table_type;
@@ -132,7 +143,25 @@ struct bnxt_ulp_mapper_ident_info {
 extern struct bnxt_ulp_device_params ulp_device_params[];
 
 /*
- * The ulp_data_field_list provides the instructions for creating an action
+ * The ulp_class_tmpl_list and ulp_act_tmpl_list are indexed by the dev_id
+ * and template id (either class or action) returned by the matcher.
+ * The result provides the start index and number of entries in the connected
+ * ulp_class_tbl_list/ulp_act_tbl_list.
+ */
+extern struct bnxt_ulp_mapper_tbl_list_info	ulp_class_tmpl_list[];
+extern struct bnxt_ulp_mapper_tbl_list_info	ulp_act_tmpl_list[];
+
+/*
+ * The ulp_class_tbl_list and ulp_act_tbl_list are indexed based on the results
+ * of the template lists.  Each entry describes the high level details of the
+ * table entry to include the start index and number of instructions in the
+ * field lists.
+ */
+extern struct bnxt_ulp_mapper_class_tbl_info	ulp_class_tbl_list[];
+extern struct bnxt_ulp_mapper_act_tbl_info	ulp_act_tbl_list[];
+
+/*
+ * The ulp_class_result_field_list provides the instructions for creating result
  * records such as tcam/em results.
  */
 extern struct bnxt_ulp_mapper_result_field_info	ulp_class_result_field_list[];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 22/33] net/bnxt: match rte flow items with flow template patterns
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (20 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 21/33] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 23/33] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
                   ` (11 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following (a conceptual sketch follows the list):
1. Takes the hdr_bitmap generated from the rte_flow_items
2. Iterates through the static hdr_bitmap list
3. Returns success if a match is found, otherwise an error
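Conceptually the match is a masked 64-bit bitmap comparison. Below is a
self-contained sketch of the idea; the struct and macro are simplified
stand-ins for the driver's ulp_rte_hdr_bitmap and ULP_BITMAP_RESET, and
the helper name is illustrative.

#include <stdint.h>

struct hdr_bitmap_sketch {
	uint64_t bits;
};

#define BITMAP_RESET(bmap, bit)	((bmap) &= ~(bit))

/* Return 1 when the flow's header bitmap, with the parser-computed bits
 * (SVIF, per-position VLAN bits, ...) masked off, equals the template's
 * pattern bitmap.
 */
static int
hdr_bitmap_match(const struct hdr_bitmap_sketch *flow,
		 const struct hdr_bitmap_sketch *tmpl,
		 uint64_t computed_bits)
{
	uint64_t bits = flow->bits;

	BITMAP_RESET(bits, computed_bits);
	return bits == tmpl->bits;
}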

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |  12 ++
 drivers/net/bnxt/tf_ulp/ulp_matcher.c         | 152 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_matcher.h         |  26 +++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 115 +++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |  40 +++++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  21 ++++
 7 files changed, 367 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 3a3dad4..9776987 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -63,6 +63,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_template_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_mapper.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_matcher.c
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index 3516df4..e4ebfc5 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -25,6 +25,18 @@
 #define	BNXT_ULP_TX_NUM_FLOWS			32
 #define	BNXT_ULP_TX_TBL_IF_ID			0
 
+enum bnxt_tf_rc {
+	BNXT_TF_RC_PARSE_ERR	= -2,
+	BNXT_TF_RC_ERROR	= -1,
+	BNXT_TF_RC_SUCCESS	= 0
+};
+
+/* ULP direction type */
+enum ulp_direction_type {
+	ULP_DIR_INGRESS,
+	ULP_DIR_EGRESS,
+};
+
 struct bnxt_ulp_mark_tbl *
 bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.c b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
new file mode 100644
index 0000000..f367e4c
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "ulp_matcher.h"
+#include "ulp_utils.h"
+
+/* Utility function to check if bitmap is zero */
+static inline int
+ulp_field_mask_is_zero(uint8_t *bitmap, uint32_t size)
+{
+	while (size-- > 0) {
+		if (*bitmap != 0)
+			return 0;
+		bitmap++;
+	}
+	return 1;
+}
+
+/* Utility function to check if bitmap is all ones */
+static inline int
+ulp_field_mask_is_ones(uint8_t *bitmap, uint32_t size)
+{
+	while (size-- > 0) {
+		if (*bitmap != 0xFF)
+			return 0;
+		bitmap++;
+	}
+	return 1;
+}
+
+/* Utility function to check if bitmap is non zero */
+static inline int
+ulp_field_mask_notzero(uint8_t *bitmap, uint32_t size)
+{
+	while (size-- > 0) {
+		if (*bitmap != 0)
+			return 1;
+		bitmap++;
+	}
+	return 0;
+}
+
+/* Utility function to mask the computed and internal proto headers. */
+static void
+ulp_matcher_hdr_fields_normalize(struct ulp_rte_hdr_bitmap *hdr1,
+				 struct ulp_rte_hdr_bitmap *hdr2)
+{
+	/* copy the contents first */
+	rte_memcpy(hdr2, hdr1, sizeof(struct ulp_rte_hdr_bitmap));
+
+	/* reset the computed fields */
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_SVIF);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_OO_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_OI_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_IO_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_II_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_O_L3);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_O_L4);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_I_L3);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_I_L4);
+}
+
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the pattern masks against the flow templates.
+ */
+int32_t
+ulp_matcher_pattern_match(enum ulp_direction_type   dir,
+			  struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			  struct ulp_rte_hdr_field  *hdr_field,
+			  struct ulp_rte_act_bitmap *act_bitmap,
+			  uint32_t		    *class_id)
+{
+	struct bnxt_ulp_header_match_info	*sel_hdr_match;
+	uint32_t				hdr_num, idx, jdx;
+	uint32_t				match = 0;
+	struct ulp_rte_hdr_bitmap		hdr_bitmap_masked;
+	uint32_t				start_idx;
+	struct ulp_rte_hdr_field		*m_field;
+	struct bnxt_ulp_matcher_field_info	*sf;
+
+	/* Select the ingress or egress template to match against */
+	if (dir == ULP_DIR_INGRESS) {
+		sel_hdr_match = ulp_ingress_hdr_match_list;
+		hdr_num = BNXT_ULP_INGRESS_HDR_MATCH_SZ;
+	} else {
+		sel_hdr_match = ulp_egress_hdr_match_list;
+		hdr_num = BNXT_ULP_EGRESS_HDR_MATCH_SZ;
+	}
+
+	/* Remove the hdr bit maps that are internal or computed */
+	ulp_matcher_hdr_fields_normalize(hdr_bitmap, &hdr_bitmap_masked);
+
+	/* Loop through the list of class templates to find the match */
+	for (idx = 0; idx < hdr_num; idx++, sel_hdr_match++) {
+		if (ULP_BITSET_CMP(&sel_hdr_match->hdr_bitmap,
+				   &hdr_bitmap_masked)) {
+			/* no match found */
+			BNXT_TF_DBG(DEBUG, "Pattern Match failed template=%d\n",
+				    idx);
+			continue;
+		}
+		match = ULP_BITMAP_ISSET(act_bitmap->bits,
+					 BNXT_ULP_ACTION_BIT_VNIC);
+		if (match != sel_hdr_match->act_vnic) {
+			/* no match found */
+			BNXT_TF_DBG(DEBUG, "Vnic Match failed template=%d\n",
+				    idx);
+			continue;
+		} else {
+			match = 1;
+		}
+
+		/* Found a matching hdr bitmap, match the fields next */
+		start_idx = sel_hdr_match->start_idx;
+		for (jdx = 0; jdx < sel_hdr_match->num_entries; jdx++) {
+			m_field = &hdr_field[jdx + BNXT_ULP_HDR_FIELD_LAST - 1];
+			sf = &ulp_field_match[start_idx + jdx];
+			switch (sf->mask_opcode) {
+			case BNXT_ULP_FMF_MASK_ANY:
+				match &= ulp_field_mask_is_zero(m_field->mask,
+								m_field->size);
+				break;
+			case BNXT_ULP_FMF_MASK_EXACT:
+				match &= ulp_field_mask_is_ones(m_field->mask,
+								m_field->size);
+				break;
+			case BNXT_ULP_FMF_MASK_WILDCARD:
+				match &= ulp_field_mask_notzero(m_field->mask,
+								m_field->size);
+				break;
+			case BNXT_ULP_FMF_MASK_IGNORE:
+			default:
+				break;
+			}
+			if (!match)
+				break;
+		}
+		if (match) {
+			BNXT_TF_DBG(DEBUG,
+				    "Found matching pattern template %d\n",
+				    sel_hdr_match->class_tmpl_id);
+			*class_id = sel_hdr_match->class_tmpl_id;
+			return BNXT_TF_RC_SUCCESS;
+		}
+	}
+	BNXT_TF_DBG(DEBUG, "Did not find any matching template\n");
+	*class_id = 0;
+	return BNXT_TF_RC_ERROR;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.h b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
new file mode 100644
index 0000000..57a161d
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef ULP_MATCHER_H_
+#define ULP_MATCHER_H_
+
+#include <rte_log.h>
+#include "bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_tf_common.h"
+
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the pattern masks against the flow templates.
+ */
+int32_t
+ulp_matcher_pattern_match(enum ulp_direction_type	    dir,
+			  struct ulp_rte_hdr_bitmap	   *hdr_bitmap,
+			  struct ulp_rte_hdr_field	   *hdr_field,
+			  struct ulp_rte_act_bitmap	   *act_bitmap,
+			  uint32_t			   *class_id);
+
+#endif /* ULP_MATCHER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 5ec7adc..68a2dc0 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -796,6 +796,121 @@ struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
 	}
 };
 
+struct bnxt_ulp_header_match_info ulp_ingress_hdr_match_list[] = {
+	{
+	.hdr_bitmap = { .bits =
+		BNXT_ULP_HDR_BIT_O_ETH |
+		BNXT_ULP_HDR_BIT_O_IPV4 |
+		BNXT_ULP_HDR_BIT_O_UDP },
+	.start_idx = 0,
+	.num_entries = 24,
+	.class_tmpl_id = 0,
+	.act_vnic = 0
+	}
+};
+
+struct bnxt_ulp_header_match_info ulp_egress_hdr_match_list[] = {
+};
+
+struct bnxt_ulp_matcher_field_info ulp_field_match[] = {
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_ANY,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_ANY,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	}
+};
+
 struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
 	{
 	.field_bit_size = 10,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 957b21a..319500a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -13,6 +13,8 @@
 
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
+#define BNXT_ULP_INGRESS_HDR_MATCH_SZ 2
+#define BNXT_ULP_EGRESS_HDR_MATCH_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -45,6 +47,31 @@ enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_LAST             = 0x0000000008000000
 };
 
+enum bnxt_ulp_hdr_bit {
+	BNXT_ULP_HDR_BIT_SVIF                = 0x0000000000000001,
+	BNXT_ULP_HDR_BIT_O_ETH               = 0x0000000000000002,
+	BNXT_ULP_HDR_BIT_OO_VLAN             = 0x0000000000000004,
+	BNXT_ULP_HDR_BIT_OI_VLAN             = 0x0000000000000008,
+	BNXT_ULP_HDR_BIT_O_L3                = 0x0000000000000010,
+	BNXT_ULP_HDR_BIT_O_IPV4              = 0x0000000000000020,
+	BNXT_ULP_HDR_BIT_O_IPV6              = 0x0000000000000040,
+	BNXT_ULP_HDR_BIT_O_L4                = 0x0000000000000080,
+	BNXT_ULP_HDR_BIT_O_TCP               = 0x0000000000000100,
+	BNXT_ULP_HDR_BIT_O_UDP               = 0x0000000000000200,
+	BNXT_ULP_HDR_BIT_T_VXLAN             = 0x0000000000000400,
+	BNXT_ULP_HDR_BIT_T_GRE               = 0x0000000000000800,
+	BNXT_ULP_HDR_BIT_I_ETH               = 0x0000000000001000,
+	BNXT_ULP_HDR_BIT_IO_VLAN             = 0x0000000000002000,
+	BNXT_ULP_HDR_BIT_II_VLAN             = 0x0000000000004000,
+	BNXT_ULP_HDR_BIT_I_L3                = 0x0000000000008000,
+	BNXT_ULP_HDR_BIT_I_IPV4              = 0x0000000000010000,
+	BNXT_ULP_HDR_BIT_I_IPV6              = 0x0000000000020000,
+	BNXT_ULP_HDR_BIT_I_L4                = 0x0000000000040000,
+	BNXT_ULP_HDR_BIT_I_TCP               = 0x0000000000080000,
+	BNXT_ULP_HDR_BIT_I_UDP               = 0x0000000000100000,
+	BNXT_ULP_HDR_BIT_LAST                = 0x0000000000200000
+};
+
 enum bnxt_ulp_byte_order {
 	BNXT_ULP_BYTE_ORDER_BE,
 	BNXT_ULP_BYTE_ORDER_LE,
@@ -67,12 +94,25 @@ enum bnxt_ulp_fmf_mask {
 	BNXT_ULP_FMF_MASK_LAST
 };
 
+enum bnxt_ulp_fmf_spec {
+	BNXT_ULP_FMF_SPEC_IGNORE = 0,
+	BNXT_ULP_FMF_SPEC_LAST = 1
+};
+
 enum bnxt_ulp_mark_enable {
 	BNXT_ULP_MARK_ENABLE_NO = 0,
 	BNXT_ULP_MARK_ENABLE_YES = 1,
 	BNXT_ULP_MARK_ENABLE_LAST = 2
 };
 
+enum bnxt_ulp_hdr_field {
+	BNXT_ULP_HDR_FIELD_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HDR_FIELD_O_VTAG_NUM = 1,
+	BNXT_ULP_HDR_FIELD_I_VTAG_NUM = 2,
+	BNXT_ULP_HDR_FIELD_SVIF_INDEX = 3,
+	BNXT_ULP_HDR_FIELD_LAST = 4
+};
+
 enum bnxt_ulp_mask_opc {
 	BNXT_ULP_MASK_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index b7094c5..dd06fb1 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -29,6 +29,11 @@ struct ulp_rte_hdr_field {
 	uint32_t	size;
 };
 
+struct bnxt_ulp_matcher_field_info {
+	enum bnxt_ulp_fmf_mask	mask_opcode;
+	enum bnxt_ulp_fmf_spec	spec_opcode;
+};
+
 struct ulp_rte_act_bitmap {
 	uint64_t	bits;
 };
@@ -41,6 +46,22 @@ struct ulp_rte_act_prop {
 	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
 };
 
+/* Flow Matcher structures */
+struct bnxt_ulp_header_match_info {
+	struct ulp_rte_hdr_bitmap		hdr_bitmap;
+	uint32_t				start_idx;
+	uint32_t				num_entries;
+	uint32_t				class_tmpl_id;
+	uint32_t				act_vnic;
+};
+
+/* Flow matcher template structure arrays defined in template source */
+extern struct bnxt_ulp_header_match_info  ulp_ingress_hdr_match_list[];
+extern struct bnxt_ulp_header_match_info  ulp_egress_hdr_match_list[];
+
+/* Flow field match information structure array defined in template source */
+extern struct bnxt_ulp_matcher_field_info	ulp_field_match[];
+
 /* Device specific parameters */
 struct bnxt_ulp_device_params {
 	uint8_t				description[16];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 23/33] net/bnxt: match rte flow actions with flow template actions
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (21 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 22/33] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 24/33] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
                   ` (10 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Takes the act_bitmap generated from the rte_flow_actions
2. Iterates through the static act_bitmap template list
3. Returns success if a match is found, otherwise an error (see the
   sketch below)
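
A minimal sketch of the lookup, under simplified and hypothetical types
(the real driver compares struct bnxt_ulp_action_match_info entries with
ULP_BITSET_CMP and returns the act_tmpl_id of the matching entry):

#include <stdint.h>

/* Simplified stand-in for struct bnxt_ulp_action_match_info */
struct act_match_info {
	uint64_t act_bitmap;  /* action bits the template supports */
	uint32_t act_tmpl_id; /* template id returned on a match */
};

/* Walk the template list and return the first exact bitmap match */
static int
match_act_bitmap(const struct act_match_info *list, uint32_t num,
		 uint64_t act_bits, uint32_t *act_id)
{
	uint32_t i;

	for (i = 0; i < num; i++) {
		if (list[i].act_bitmap == act_bits) {
			*act_id = list[i].act_tmpl_id;
			return 0;  /* match found */
		}
	}
	return -1; /* no matching template */
}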

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_matcher.c         | 36 +++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_matcher.h         |  9 +++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 12 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |  2 ++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h | 10 ++++++++
 5 files changed, 69 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.c b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
index f367e4c..040d08d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_matcher.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
@@ -150,3 +150,39 @@ ulp_matcher_pattern_match(enum ulp_direction_type   dir,
 	*class_id = 0;
 	return BNXT_TF_RC_ERROR;
 }
+
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the action against the flow templates.
+ */
+int32_t
+ulp_matcher_action_match(enum ulp_direction_type		dir,
+			 struct ulp_rte_act_bitmap		*act_bitmap,
+			 uint32_t				*act_id)
+{
+	struct bnxt_ulp_action_match_info	*sel_act_match;
+	uint32_t				act_num, idx;
+
+	/* Select the ingress or egress template to match against */
+	if (dir == ULP_DIR_INGRESS) {
+		sel_act_match = ulp_ingress_act_match_list;
+		act_num = BNXT_ULP_INGRESS_ACT_MATCH_SZ;
+	} else {
+		sel_act_match = ulp_egress_act_match_list;
+		act_num = BNXT_ULP_EGRESS_ACT_MATCH_SZ;
+	}
+
+	/* Loop through the list of action templates to find the match */
+	for (idx = 0; idx < act_num; idx++, sel_act_match++) {
+		if (!ULP_BITSET_CMP(&sel_act_match->act_bitmap,
+				    act_bitmap)) {
+			*act_id = sel_act_match->act_tmpl_id;
+			BNXT_TF_DBG(DEBUG, "Found matching action template %u\n",
+				    *act_id);
+			return BNXT_TF_RC_SUCCESS;
+		}
+	}
+	BNXT_TF_DBG(DEBUG, "Did not find any matching action template\n");
+	*act_id = 0;
+	return BNXT_TF_RC_ERROR;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.h b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
index 57a161d..c818bbe 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_matcher.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
@@ -23,4 +23,13 @@ ulp_matcher_pattern_match(enum ulp_direction_type	    dir,
 			  struct ulp_rte_act_bitmap	   *act_bitmap,
 			  uint32_t			   *class_id);
 
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the action against the flow templates.
+ */
+int32_t
+ulp_matcher_action_match(enum ulp_direction_type	dir,
+			 struct ulp_rte_act_bitmap	*act_bitmap,
+			 uint32_t			*act_id);
+
 #endif /* ULP_MATCHER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 68a2dc0..5981c74 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -1119,6 +1119,18 @@ struct bnxt_ulp_mapper_ident_info ulp_ident_list[] = {
 	}
 };
 
+struct bnxt_ulp_action_match_info ulp_ingress_act_match_list[] = {
+	{
+	.act_bitmap = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_RSS },
+	.act_tmpl_id = 0
+	}
+};
+
+struct bnxt_ulp_action_match_info ulp_egress_act_match_list[] = {
+};
+
 struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
 	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 319500a..f4850bf 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -15,6 +15,8 @@
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 #define BNXT_ULP_INGRESS_HDR_MATCH_SZ 2
 #define BNXT_ULP_EGRESS_HDR_MATCH_SZ 1
+#define BNXT_ULP_INGRESS_ACT_MATCH_SZ 2
+#define BNXT_ULP_EGRESS_ACT_MATCH_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index dd06fb1..0e811ec 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -62,6 +62,16 @@ extern struct bnxt_ulp_header_match_info  ulp_egress_hdr_match_list[];
 /* Flow field match information structure array defined in template source */
 extern struct bnxt_ulp_matcher_field_info	ulp_field_match[];
 
+/* Flow Matcher Action structures */
+struct bnxt_ulp_action_match_info {
+	struct ulp_rte_act_bitmap		act_bitmap;
+	uint32_t				act_tmpl_id;
+};
+
+/* Flow matcher template structure arrays defined in template source */
+extern struct bnxt_ulp_action_match_info  ulp_ingress_act_match_list[];
+extern struct bnxt_ulp_action_match_info  ulp_egress_act_match_list[];
+
 /* Device specific parameters */
 struct bnxt_ulp_device_params {
 	uint8_t				description[16];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 24/33] net/bnxt: add support for rte flow item parsing
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (22 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 23/33] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 25/33] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
                   ` (9 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

1. Registers a callback handler for each rte_flow_item type, if it
   is supported
2. Iterates through each rte_flow_item until RTE_FLOW_ITEM_TYPE_END
3. Invokes the header callback handler for each item
4. Each header callback handler populates the respective fields
   in hdr_field and hdr_bitmap (see the dispatch sketch below)
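
A minimal sketch of the dispatch, under simplified and hypothetical types
(the real lookup table is ulp_hdr_info[] in ulp_template_db.c, and the
real handlers also carry hdr_field and index state):

#include <stdint.h>
#include <rte_flow.h>

/* Simplified per-item-type callback; real handlers take more state */
typedef int32_t (*proto_hdr_cb)(const struct rte_flow_item *item,
				uint64_t *hdr_bitmap);

/*
 * Walk the pattern until RTE_FLOW_ITEM_TYPE_END, dispatching each item
 * to its registered handler; cb_tbl must cover every item type used.
 */
static int32_t
parse_pattern(const struct rte_flow_item pattern[],
	      const proto_hdr_cb cb_tbl[], uint64_t *hdr_bitmap)
{
	const struct rte_flow_item *item = pattern;

	while (item && item->type != RTE_FLOW_ITEM_TYPE_END) {
		if (!cb_tbl[item->type])
			return -1; /* unsupported item type */
		if (cb_tbl[item->type](item, hdr_bitmap))
			return -1; /* handler flagged a parse error */
		item++;
	}
	return 0;
}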

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   1 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      | 767 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h      | 120 ++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 197 +++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |   7 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  26 +
 6 files changed, 1118 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 9776987..29d45e7 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -64,6 +64,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_template_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_mapper.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_matcher.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_rte_parser.c
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
new file mode 100644
index 0000000..3ffdcbd
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -0,0 +1,767 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_tf_common.h"
+#include "ulp_rte_parser.h"
+#include "ulp_utils.h"
+#include "tfp.h"
+
+/* Inline function to read an integer stored in big endian format */
+static inline void ulp_util_field_int_read(uint8_t *buffer,
+					   uint32_t *val)
+{
+	uint32_t temp_val;
+
+	memcpy(&temp_val, buffer, sizeof(uint32_t));
+	*val = rte_be_to_cpu_32(temp_val);
+}
+
+/* Inline function to write an integer stored in big endian format */
+static inline void ulp_util_field_int_write(uint8_t *buffer,
+					    uint32_t val)
+{
+	uint32_t temp_val = rte_cpu_to_be_32(val);
+
+	memcpy(buffer, &temp_val, sizeof(uint32_t));
+}
+
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow items into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
+			      struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			      struct ulp_rte_hdr_field *hdr_field)
+{
+	const struct rte_flow_item *item = pattern;
+	uint32_t field_idx = BNXT_ULP_HDR_FIELD_LAST;
+	uint32_t vlan_idx = 0;
+	struct bnxt_ulp_rte_hdr_info *hdr_info;
+
+	/* Parse all the items in the pattern */
+	while (item && item->type != RTE_FLOW_ITEM_TYPE_END) {
+		/* get the header information from the flow_hdr_info table */
+		hdr_info = &ulp_hdr_info[item->type];
+		if (hdr_info->hdr_type ==
+		    BNXT_ULP_HDR_TYPE_NOT_SUPPORTED) {
+			BNXT_TF_DBG(ERR,
+				    "Truflow parser does not support type %d\n",
+				    item->type);
+			return BNXT_TF_RC_PARSE_ERR;
+		} else if (hdr_info->hdr_type ==
+			   BNXT_ULP_HDR_TYPE_SUPPORTED) {
+			/* call the registered callback handler */
+			if (hdr_info->proto_hdr_func) {
+				if (hdr_info->proto_hdr_func(item,
+							     hdr_bitmap,
+							     hdr_field,
+							     &field_idx,
+							     &vlan_idx) !=
+				    BNXT_TF_RC_SUCCESS) {
+					return BNXT_TF_RC_ERROR;
+				}
+			}
+		}
+		item++;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Utility function to update the SVIF in the hdr_field and hdr_bitmap. */
+static int32_t
+ulp_rte_parser_svif_set(struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			enum rte_flow_item_type proto,
+			uint32_t svif,
+			uint32_t mask)
+{
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_SVIF)) {
+		BNXT_TF_DBG(ERR,
+			    "SVIF already set,"
+			    " multiple sources not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* TBD: Check for any mapping errors for svif */
+	/* Update the hdr_bitmap with BNXT_ULP_HDR_PROTO_SVIF. */
+	ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_SVIF);
+
+	if (proto != RTE_FLOW_ITEM_TYPE_PF) {
+		memcpy(hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].spec,
+		       &svif, sizeof(svif));
+		memcpy(hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].mask,
+		       &mask, sizeof(mask));
+		hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].size = sizeof(svif);
+	}
+
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item PF Header. */
+int32_t
+ulp_rte_pf_hdr_handler(const struct rte_flow_item *item,
+		       struct ulp_rte_hdr_bitmap *hdr_bitmap,
+		       struct ulp_rte_hdr_field *hdr_field,
+		       uint32_t *field_idx __rte_unused,
+		       uint32_t *vlan_idx __rte_unused)
+{
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, 0, 0);
+}
+
+/* Function to handle the parsing of RTE Flow item VF Header. */
+int32_t
+ulp_rte_vf_hdr_handler(const struct rte_flow_item *item,
+		       struct ulp_rte_hdr_bitmap *hdr_bitmap,
+		       struct ulp_rte_hdr_field	 *hdr_field,
+		       uint32_t *field_idx __rte_unused,
+		       uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_vf *vf_spec, *vf_mask;
+	uint32_t svif = 0, mask = 0;
+
+	vf_spec = item->spec;
+	vf_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for VF into hdr_field using the VF id
+	 * fields.
+	 */
+	if (vf_spec)
+		svif = vf_spec->id;
+	if (vf_mask)
+		mask = vf_mask->id;
+
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, svif, mask);
+}
+
+/* Function to handle the parsing of RTE Flow item port id  Header. */
+int32_t
+ulp_rte_port_id_hdr_handler(const struct rte_flow_item *item,
+			    struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			    struct ulp_rte_hdr_field *hdr_field,
+			    uint32_t *field_idx __rte_unused,
+			    uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_port_id *port_spec, *port_mask;
+	uint32_t svif = 0, mask = 0;
+
+	port_spec = item->spec;
+	port_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for Port into hdr_field using port id
+	 * header fields.
+	 */
+	if (port_spec)
+		svif = port_spec->id;
+	if (port_mask)
+		mask = port_mask->id;
+
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, svif, mask);
+}
+
+/* Function to handle the parsing of RTE Flow item phy port Header. */
+int32_t
+ulp_rte_phy_port_hdr_handler(const struct rte_flow_item *item,
+			     struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			     struct ulp_rte_hdr_field *hdr_field,
+			     uint32_t *field_idx __rte_unused,
+			     uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_phy_port *port_spec, *port_mask;
+	uint32_t svif = 0, mask = 0;
+
+	port_spec = item->spec;
+	port_mask = item->mask;
+
+	/* Copy the rte_flow_item for phy port into hdr_field */
+	if (port_spec)
+		svif = port_spec->index;
+	if (port_mask)
+		mask = port_mask->index;
+
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, svif, mask);
+}
+
+/* Function to handle the parsing of RTE Flow item Ethernet Header. */
+int32_t
+ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
+			struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			uint32_t *field_idx,
+			uint32_t *vlan_idx)
+{
+	const struct rte_flow_item_eth *eth_spec, *eth_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+	uint64_t set_flag = 0;
+
+	eth_spec = item->spec;
+	eth_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for eth into hdr_field using ethernet
+	 * header fields
+	 */
+	if (eth_spec) {
+		hdr_field[idx].size = sizeof(eth_spec->dst.addr_bytes);
+		memcpy(hdr_field[idx++].spec, eth_spec->dst.addr_bytes,
+		       sizeof(eth_spec->dst.addr_bytes));
+		hdr_field[idx].size = sizeof(eth_spec->src.addr_bytes);
+		memcpy(hdr_field[idx++].spec, eth_spec->src.addr_bytes,
+		       sizeof(eth_spec->src.addr_bytes));
+		hdr_field[idx].size = sizeof(eth_spec->type);
+		memcpy(hdr_field[idx++].spec, &eth_spec->type,
+		       sizeof(eth_spec->type));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_ETH_NUM;
+	}
+
+	if (eth_mask) {
+		memcpy(hdr_field[mdx++].mask, eth_mask->dst.addr_bytes,
+		       sizeof(eth_mask->dst.addr_bytes));
+		memcpy(hdr_field[mdx++].mask, eth_mask->src.addr_bytes,
+		       sizeof(eth_mask->src.addr_bytes));
+		memcpy(hdr_field[mdx++].mask, &eth_mask->type,
+		       sizeof(eth_mask->type));
+	}
+	/* Add number of vlan header elements */
+	*field_idx = idx + BNXT_ULP_PROTO_HDR_VLAN_NUM;
+	*vlan_idx = idx;
+
+	/* Update the hdr_bitmap with BNXT_ULP_HDR_PROTO_I_ETH */
+	set_flag = ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH);
+	if (set_flag)
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_ETH);
+	else
+		ULP_BITMAP_RESET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_ETH);
+
+	/* update the hdr_bitmap with BNXT_ULP_HDR_PROTO_O_ETH */
+	ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH);
+
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item Vlan Header. */
+int32_t
+ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
+			 struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			 struct ulp_rte_hdr_field *hdr_field,
+			 uint32_t *field_idx __rte_unused,
+			 uint32_t *vlan_idx)
+{
+	const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
+	uint32_t idx = *vlan_idx;
+	uint32_t mdx = *vlan_idx;
+	uint16_t vlan_tag, priority;
+	uint32_t outer_vtag_num = 0, inner_vtag_num = 0;
+	uint8_t *outer_tag_buffer;
+	uint8_t *inner_tag_buffer;
+
+	vlan_spec = item->spec;
+	vlan_mask = item->mask;
+	outer_tag_buffer = hdr_field[BNXT_ULP_HDR_FIELD_O_VTAG_NUM].spec;
+	inner_tag_buffer = hdr_field[BNXT_ULP_HDR_FIELD_I_VTAG_NUM].spec;
+
+	/*
+	 * Copy the rte_flow_item for vlan into hdr_field using Vlan
+	 * header fields
+	 */
+	if (vlan_spec) {
+		vlan_tag = ntohs(vlan_spec->tci);
+		priority = htons(vlan_tag >> 13);
+		vlan_tag &= 0xfff;
+		vlan_tag = htons(vlan_tag);
+
+		hdr_field[idx].size = sizeof(priority);
+		memcpy(hdr_field[idx++].spec, &priority, sizeof(priority));
+		hdr_field[idx].size = sizeof(vlan_tag);
+		memcpy(hdr_field[idx++].spec, &vlan_tag, sizeof(vlan_tag));
+		hdr_field[idx].size = sizeof(vlan_spec->inner_type);
+		memcpy(hdr_field[idx++].spec, &vlan_spec->inner_type,
+		       sizeof(vlan_spec->inner_type));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_S_VLAN_NUM;
+	}
+
+	if (vlan_mask) {
+		vlan_tag = ntohs(vlan_mask->tci);
+		priority = htons(vlan_tag >> 13);
+		vlan_tag &= 0xfff;
+		vlan_tag = htons(vlan_tag);
+
+		memcpy(hdr_field[mdx++].mask, &priority, sizeof(priority));
+		memcpy(hdr_field[mdx++].mask, &vlan_tag, sizeof(vlan_tag));
+		memcpy(hdr_field[mdx++].mask, &vlan_mask->inner_type,
+		       sizeof(vlan_mask->inner_type));
+	}
+	/* Set the vlan index to new incremented value */
+	*vlan_idx = idx;
+
+	/* Get the outer tag and inner tag counts */
+	ulp_util_field_int_read(outer_tag_buffer, &outer_vtag_num);
+	ulp_util_field_int_read(inner_tag_buffer, &inner_vtag_num);
+
+	/* Update the hdr_bitmap of the vlans */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
+	    !ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_OO_VLAN)) {
+		/* Set the outer vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_OO_VLAN);
+		outer_vtag_num++;
+		ulp_util_field_int_write(outer_tag_buffer, outer_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_O_VTAG_NUM].size =
+							sizeof(uint32_t);
+	} else if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OO_VLAN) &&
+		   !ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				     BNXT_ULP_HDR_BIT_OI_VLAN)) {
+		/* Set the outer vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_OI_VLAN);
+		outer_vtag_num++;
+		ulp_util_field_int_write(outer_tag_buffer, outer_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_O_VTAG_NUM].size =
+							    sizeof(uint32_t);
+	} else if (ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_O_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OO_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OI_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_I_ETH) &&
+		   !ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				     BNXT_ULP_HDR_BIT_IO_VLAN)) {
+		/* Set the inner vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_IO_VLAN);
+		inner_vtag_num++;
+		ulp_util_field_int_write(inner_tag_buffer, inner_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_I_VTAG_NUM].size =
+							    sizeof(uint32_t);
+	} else if (ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_O_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OO_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OI_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_I_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_IO_VLAN) &&
+		   !ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				     BNXT_ULP_HDR_BIT_II_VLAN)) {
+		/* Set the inner vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_II_VLAN);
+		inner_vtag_num++;
+		ulp_util_field_int_write(inner_tag_buffer, inner_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_I_VTAG_NUM].size =
+							    sizeof(uint32_t);
+	} else {
+		BNXT_TF_DBG(ERR, "Error Parsing:Vlan hdr found withtout eth\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item IPV4 Header. */
+int32_t
+ulp_rte_ipv4_hdr_handler(const struct rte_flow_item *item,
+			 struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			 struct ulp_rte_hdr_field *hdr_field,
+			 uint32_t *field_idx,
+			 uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	ipv4_spec = item->spec;
+	ipv4_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3)) {
+		BNXT_TF_DBG(ERR, "Parse Error:Third L3 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for ipv4 into hdr_field using ipv4
+	 * header fields
+	 */
+	if (ipv4_spec) {
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.version_ihl);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.version_ihl,
+		       sizeof(ipv4_spec->hdr.version_ihl));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.type_of_service);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.type_of_service,
+		       sizeof(ipv4_spec->hdr.type_of_service));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.total_length);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.total_length,
+		       sizeof(ipv4_spec->hdr.total_length));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.packet_id);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.packet_id,
+		       sizeof(ipv4_spec->hdr.packet_id));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.fragment_offset);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.fragment_offset,
+		       sizeof(ipv4_spec->hdr.fragment_offset));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.time_to_live);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.time_to_live,
+		       sizeof(ipv4_spec->hdr.time_to_live));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.next_proto_id);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.next_proto_id,
+		       sizeof(ipv4_spec->hdr.next_proto_id));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.hdr_checksum);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.hdr_checksum,
+		       sizeof(ipv4_spec->hdr.hdr_checksum));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.src_addr);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.src_addr,
+		       sizeof(ipv4_spec->hdr.src_addr));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.dst_addr);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.dst_addr,
+		       sizeof(ipv4_spec->hdr.dst_addr));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_IPV4_NUM;
+	}
+
+	if (ipv4_mask) {
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.version_ihl,
+		       sizeof(ipv4_mask->hdr.version_ihl));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.type_of_service,
+		       sizeof(ipv4_mask->hdr.type_of_service));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.total_length,
+		       sizeof(ipv4_mask->hdr.total_length));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.packet_id,
+		       sizeof(ipv4_mask->hdr.packet_id));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.fragment_offset,
+		       sizeof(ipv4_mask->hdr.fragment_offset));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.time_to_live,
+		       sizeof(ipv4_mask->hdr.time_to_live));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.next_proto_id,
+		       sizeof(ipv4_mask->hdr.next_proto_id));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.hdr_checksum,
+		       sizeof(ipv4_mask->hdr.hdr_checksum));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.src_addr,
+		       sizeof(ipv4_mask->hdr.src_addr));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.dst_addr,
+		       sizeof(ipv4_mask->hdr.dst_addr));
+	}
+	*field_idx = idx; /* Number of ipv4 header elements */
+
+	/* Set the ipv4 header bitmap and computed l3 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV6)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_IPV4);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV4);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item IPV6 Header */
+int32_t
+ulp_rte_ipv6_hdr_handler(const struct rte_flow_item *item,
+			 struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			 struct ulp_rte_hdr_field *hdr_field,
+			 uint32_t *field_idx,
+			 uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	ipv6_spec = item->spec;
+	ipv6_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3)) {
+		BNXT_TF_DBG(ERR, "Parse Error: 3'rd L3 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for ipv6 into hdr_field using ipv6
+	 * header fields
+	 */
+	if (ipv6_spec) {
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.vtc_flow);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.vtc_flow,
+		       sizeof(ipv6_spec->hdr.vtc_flow));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.payload_len);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.payload_len,
+		       sizeof(ipv6_spec->hdr.payload_len));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.proto);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.proto,
+		       sizeof(ipv6_spec->hdr.proto));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.hop_limits);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.hop_limits,
+		       sizeof(ipv6_spec->hdr.hop_limits));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.src_addr);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.src_addr,
+		       sizeof(ipv6_spec->hdr.src_addr));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.dst_addr);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.dst_addr,
+		       sizeof(ipv6_spec->hdr.dst_addr));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_IPV6_NUM;
+	}
+
+	if (ipv6_mask) {
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.vtc_flow,
+		       sizeof(ipv6_mask->hdr.vtc_flow));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.payload_len,
+		       sizeof(ipv6_mask->hdr.payload_len));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.proto,
+		       sizeof(ipv6_mask->hdr.proto));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.hop_limits,
+		       sizeof(ipv6_mask->hdr.hop_limits));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.src_addr,
+		       sizeof(ipv6_mask->hdr.src_addr));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.dst_addr,
+		       sizeof(ipv6_mask->hdr.dst_addr));
+	}
+	*field_idx = idx; /* add number of ipv6 header elements */
+
+	/* Set the ipv6 header bitmap and computed l3 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV6)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_IPV6);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV6);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item UDP Header. */
+int32_t
+ulp_rte_udp_hdr_handler(const struct rte_flow_item *item,
+			struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			uint32_t *field_idx,
+			uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_udp *udp_spec, *udp_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	udp_spec = item->spec;
+	udp_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4)) {
+		BNXT_TF_DBG(ERR, "Parse Err:Third L4 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for udp into hdr_field using udp
+	 * header fields
+	 */
+	if (udp_spec) {
+		hdr_field[idx].size = sizeof(udp_spec->hdr.src_port);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.src_port,
+		       sizeof(udp_spec->hdr.src_port));
+		hdr_field[idx].size = sizeof(udp_spec->hdr.dst_port);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.dst_port,
+		       sizeof(udp_spec->hdr.dst_port));
+		hdr_field[idx].size = sizeof(udp_spec->hdr.dgram_len);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.dgram_len,
+		       sizeof(udp_spec->hdr.dgram_len));
+		hdr_field[idx].size = sizeof(udp_spec->hdr.dgram_cksum);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.dgram_cksum,
+		       sizeof(udp_spec->hdr.dgram_cksum));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_UDP_NUM;
+	}
+
+	if (udp_mask) {
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.src_port,
+		       sizeof(udp_mask->hdr.src_port));
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.dst_port,
+		       sizeof(udp_mask->hdr.dst_port));
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.dgram_len,
+		       sizeof(udp_mask->hdr.dgram_len));
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.dgram_cksum,
+		       sizeof(udp_mask->hdr.dgram_cksum));
+	}
+	*field_idx = idx; /* Add number of UDP header elements */
+
+	/* Set the udp header bitmap and computed l4 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_UDP) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_TCP)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_UDP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_UDP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item TCP Header. */
+int32_t
+ulp_rte_tcp_hdr_handler(const struct rte_flow_item *item,
+			struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			uint32_t *field_idx,
+			uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	tcp_spec = item->spec;
+	tcp_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4)) {
+		BNXT_TF_DBG(ERR, "Parse Error:Third L4 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for tcp into hdr_field using tcp
+	 * header fields
+	 */
+	if (tcp_spec) {
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.src_port);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.src_port,
+		       sizeof(tcp_spec->hdr.src_port));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.dst_port);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.dst_port,
+		       sizeof(tcp_spec->hdr.dst_port));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.sent_seq);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.sent_seq,
+		       sizeof(tcp_spec->hdr.sent_seq));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.recv_ack);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.recv_ack,
+		       sizeof(tcp_spec->hdr.recv_ack));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.data_off);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.data_off,
+		       sizeof(tcp_spec->hdr.data_off));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.tcp_flags);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.tcp_flags,
+		       sizeof(tcp_spec->hdr.tcp_flags));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.rx_win);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.rx_win,
+		       sizeof(tcp_spec->hdr.rx_win));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.cksum);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.cksum,
+		       sizeof(tcp_spec->hdr.cksum));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.tcp_urp);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.tcp_urp,
+		       sizeof(tcp_spec->hdr.tcp_urp));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_TCP_NUM;
+	}
+
+	if (tcp_mask) {
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.src_port,
+		       sizeof(tcp_mask->hdr.src_port));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.dst_port,
+		       sizeof(tcp_mask->hdr.dst_port));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.sent_seq,
+		       sizeof(tcp_mask->hdr.sent_seq));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.recv_ack,
+		       sizeof(tcp_mask->hdr.recv_ack));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.data_off,
+		       sizeof(tcp_mask->hdr.data_off));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.tcp_flags,
+		       sizeof(tcp_mask->hdr.tcp_flags));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.rx_win,
+		       sizeof(tcp_mask->hdr.rx_win));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.cksum,
+		       sizeof(tcp_mask->hdr.cksum));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.tcp_urp,
+		       sizeof(tcp_mask->hdr.tcp_urp));
+	}
+	*field_idx = idx; /* add number of TCP header elements */
+
+	/* Set the tcp header bitmap and computed l4 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_UDP) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_TCP)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_TCP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_TCP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item Vxlan Header. */
+int32_t
+ulp_rte_vxlan_hdr_handler(const struct rte_flow_item *item,
+			  struct ulp_rte_hdr_bitmap *hdrbitmap,
+			  struct ulp_rte_hdr_field *hdr_field,
+			  uint32_t *field_idx,
+			  uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_vxlan *vxlan_spec, *vxlan_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	vxlan_spec = item->spec;
+	vxlan_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for vxlan into hdr_field using vxlan
+	 * header fields
+	 */
+	if (vxlan_spec) {
+		hdr_field[idx].size = sizeof(vxlan_spec->flags);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->flags,
+		       sizeof(vxlan_spec->flags));
+		hdr_field[idx].size = sizeof(vxlan_spec->rsvd0);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->rsvd0,
+		       sizeof(vxlan_spec->rsvd0));
+		hdr_field[idx].size = sizeof(vxlan_spec->vni);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->vni,
+		       sizeof(vxlan_spec->vni));
+		hdr_field[idx].size = sizeof(vxlan_spec->rsvd1);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->rsvd1,
+		       sizeof(vxlan_spec->rsvd1));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_VXLAN_NUM;
+	}
+
+	if (vxlan_mask) {
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->flags,
+		       sizeof(vxlan_mask->flags));
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->rsvd0,
+		       sizeof(vxlan_mask->rsvd0));
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->vni,
+		       sizeof(vxlan_mask->vni));
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->rsvd1,
+		       sizeof(vxlan_mask->rsvd1));
+	}
+	*field_idx = idx; /* Add number of vxlan header elements */
+
+	/* Update the hdr_bitmap with vxlan */
+	ULP_BITMAP_SET(hdrbitmap->bits, BNXT_ULP_HDR_BIT_T_VXLAN);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item void Header */
+int32_t
+ulp_rte_void_hdr_handler(const struct rte_flow_item *item __rte_unused,
+			 struct ulp_rte_hdr_bitmap *hdr_bit __rte_unused,
+			 struct ulp_rte_hdr_field *hdr_field __rte_unused,
+			 uint32_t *field_idx __rte_unused,
+			 uint32_t *vlan_idx __rte_unused)
+{
+	return BNXT_TF_RC_SUCCESS;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
new file mode 100644
index 0000000..3a7845d
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_RTE_PARSER_H_
+#define _ULP_RTE_PARSER_H_
+
+#include <rte_log.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow items into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
+			      struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			      struct ulp_rte_hdr_field  *hdr_field);
+
+/* Function to handle the parsing of RTE Flow item PF Header. */
+int32_t
+ulp_rte_pf_hdr_handler(const struct rte_flow_item	*item,
+		       struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+		       struct ulp_rte_hdr_field		*hdr_field,
+		       uint32_t				*field_idx,
+		       uint32_t				*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item VF Header. */
+int32_t
+ulp_rte_vf_hdr_handler(const struct rte_flow_item	*item,
+		       struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+		       struct ulp_rte_hdr_field		*hdr_field,
+		       uint32_t				*field_idx,
+		       uint32_t				*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item port id Header. */
+int32_t
+ulp_rte_port_id_hdr_handler(const struct rte_flow_item	*item,
+			    struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			    struct ulp_rte_hdr_field	*hdr_field,
+			    uint32_t			*field_idx,
+			    uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item phy port Header. */
+int32_t
+ulp_rte_phy_port_hdr_handler(const struct rte_flow_item	*item,
+			     struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			     struct ulp_rte_hdr_field	*hdr_field,
+			     uint32_t			*field_idx,
+			     uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item Ethernet Header. */
+int32_t
+ulp_rte_eth_hdr_handler(const struct rte_flow_item	*item,
+			struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			struct ulp_rte_hdr_field	*hdr_field,
+			uint32_t			*field_idx,
+			uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item Vlan Header. */
+int32_t
+ulp_rte_vlan_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item IPV4 Header. */
+int32_t
+ulp_rte_ipv4_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item IPV6 Header. */
+int32_t
+ulp_rte_ipv6_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item UDP Header. */
+int32_t
+ulp_rte_udp_hdr_handler(const struct rte_flow_item	*item,
+			struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			struct ulp_rte_hdr_field	*hdr_field,
+			uint32_t			*field_idx,
+			uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item TCP Header. */
+int32_t
+ulp_rte_tcp_hdr_handler(const struct rte_flow_item	*item,
+			struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			struct ulp_rte_hdr_field	*hdr_field,
+			uint32_t			*field_idx,
+			uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item Vxlan Header. */
+int32_t
+ulp_rte_vxlan_hdr_handler(const struct rte_flow_item	*item,
+			  struct ulp_rte_hdr_bitmap	*hdrbitmap,
+			  struct ulp_rte_hdr_field	*hdr_field,
+			  uint32_t			*field_idx,
+			  uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item void Header. */
+int32_t
+ulp_rte_void_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+#endif /* _ULP_RTE_PARSER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 5981c74..1d15563 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -11,6 +11,8 @@
 #include "ulp_template_db.h"
 #include "ulp_template_field_db.h"
 #include "ulp_template_struct.h"
+#include "ulp_rte_parser.h"
+
 uint32_t ulp_act_prop_map_table[] = {
 	[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ] =
 		BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ,
@@ -108,6 +110,201 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 	}
 };
 
+struct bnxt_ulp_rte_hdr_info ulp_hdr_info[] = {
+	[RTE_FLOW_ITEM_TYPE_END] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_END,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_VOID] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_void_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_INVERT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ANY] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PF] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_pf_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_VF] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_vf_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_PHY_PORT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_phy_port_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_PORT_ID] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_port_id_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_RAW] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ETH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_eth_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_VLAN] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_vlan_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV4] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_ipv4_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV6] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_ipv6_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_UDP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_udp_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_TCP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_tcp_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_SCTP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_VXLAN] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_vxlan_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_E_TAG] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_NVGRE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_MPLS] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GRE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_FUZZY] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTPC] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTPU] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ESP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GENEVE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_VXLAN_GPE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV6_EXT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_NS] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_NA] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_SLA_ETH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_TLA_ETH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_MARK] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_META] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GRE_KEY] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTP_PSC] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PPPOES] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PPPOED] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PPPOE_PROTO_ID] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_NSH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_IGMP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_AH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_HIGIG2] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	}
+};
+
 struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
 	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index f4850bf..906b542 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -115,6 +115,13 @@ enum bnxt_ulp_hdr_field {
 	BNXT_ULP_HDR_FIELD_LAST = 4
 };
 
+enum bnxt_ulp_hdr_type {
+	BNXT_ULP_HDR_TYPE_NOT_SUPPORTED = 0,
+	BNXT_ULP_HDR_TYPE_SUPPORTED = 1,
+	BNXT_ULP_HDR_TYPE_END = 2,
+	BNXT_ULP_HDR_TYPE_LAST = 3
+};
+
 enum bnxt_ulp_mask_opc {
 	BNXT_ULP_MASK_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 0e811ec..0699634 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,6 +17,18 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
+/* Number of fields for each protocol */
+#define BNXT_ULP_PROTO_HDR_SVIF_NUM	1
+#define BNXT_ULP_PROTO_HDR_ETH_NUM	3
+#define BNXT_ULP_PROTO_HDR_S_VLAN_NUM	3
+#define BNXT_ULP_PROTO_HDR_VLAN_NUM	6
+#define BNXT_ULP_PROTO_HDR_IPV4_NUM	10
+#define BNXT_ULP_PROTO_HDR_IPV6_NUM	6
+#define BNXT_ULP_PROTO_HDR_UDP_NUM	4
+#define BNXT_ULP_PROTO_HDR_TCP_NUM	9
+#define BNXT_ULP_PROTO_HDR_VXLAN_NUM	4
+#define BNXT_ULP_PROTO_HDR_MAX		128
+
 struct ulp_rte_hdr_bitmap {
 	uint64_t	bits;
 };
@@ -29,6 +41,20 @@ struct ulp_rte_hdr_field {
 	uint32_t	size;
 };
 
+/* Flow Parser Header Information Structure */
+struct bnxt_ulp_rte_hdr_info {
+	enum bnxt_ulp_hdr_type					hdr_type;
+	/* Flow Parser Protocol Header Function Prototype */
+	int (*proto_hdr_func)(const struct rte_flow_item	*item_list,
+			      struct ulp_rte_hdr_bitmap		*hdr_bitmap,
+			      struct ulp_rte_hdr_field		*hdr_field,
+			      uint32_t				*field_idx,
+			      uint32_t				*vlan_idx);
+};
+
+/* Flow Parser Header Information Structure Array defined in template source */
+extern struct bnxt_ulp_rte_hdr_info	ulp_hdr_info[];
+
 struct bnxt_ulp_matcher_field_info {
 	enum bnxt_ulp_fmf_mask	mask_opcode;
 	enum bnxt_ulp_fmf_spec	spec_opcode;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 25/33] net/bnxt: add support for rte flow action parsing
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (23 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 24/33] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 26/33] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
                   ` (8 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Registers a callback handler for each rte_flow_action type, if
   it is supported
2. Iterates through each rte_flow_action till RTE_FLOW_ACTION_TYPE_END
3. Invokes the action callback handler
4. Each action callback handler populates the respective fields in
   act_details & act_bitmap
A short sketch of this dispatch scheme follows.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |   7 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      | 441 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h      |  85 ++++-
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 199 ++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |   7 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  13 +
 6 files changed, 751 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index e4ebfc5..f417579 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -31,6 +31,13 @@ enum bnxt_tf_rc {
 	BNXT_TF_RC_SUCCESS	= 0
 };
 
+/* eth IP type (v4/v6) */
+enum bnxt_ulp_eth_ip_type {
+	BNXT_ULP_ETH_IPV4 = 4,
+	BNXT_ULP_ETH_IPV6 = 5,
+	BNXT_ULP_MAX_ETH_IP_TYPE = 0
+};
+
 /* ulp direction Type */
 enum ulp_direction_type {
 	ULP_DIR_INGRESS,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 3ffdcbd..7a31b43 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -30,6 +30,21 @@ static inline void ulp_util_field_int_write(uint8_t *buffer,
 	memcpy(buffer, &temp_val, sizeof(uint32_t));
 }
 
+/* Utility function to skip the void items. */
+static inline int32_t
+ulp_rte_item_skip_void(const struct rte_flow_item **item, uint32_t increment)
+{
+	if (!*item)
+		return 0;
+	if (increment)
+		(*item)++;
+	while ((*item) && (*item)->type == RTE_FLOW_ITEM_TYPE_VOID)
+		(*item)++;
+	if (*item)
+		return 1;
+	return 0;
+}
+
 /*
  * Function to handle the parsing of RTE Flows and placing
  * the RTE flow items into the ulp structures.
@@ -73,6 +88,45 @@ bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
 	return BNXT_TF_RC_SUCCESS;
 }
 
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow actions into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action actions[],
+			      struct ulp_rte_act_bitmap *act_bitmap,
+			      struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action *action_item = actions;
+	struct bnxt_ulp_rte_act_info *act_info;
+
+	/* Parse all the actions in the action list */
+	while (action_item && action_item->type != RTE_FLOW_ACTION_TYPE_END) {
+		/* get the action information from the ulp_act_info table */
+		act_info = &ulp_act_info[action_item->type];
+		if (act_info->act_type ==
+		    BNXT_ULP_ACT_TYPE_NOT_SUPPORTED) {
+			BNXT_TF_DBG(ERR,
+				    "Truflow parser does not support act %u\n",
+				    action_item->type);
+			return BNXT_TF_RC_ERROR;
+		} else if (act_info->act_type ==
+		    BNXT_ULP_ACT_TYPE_SUPPORTED) {
+			/* call the registered callback handler */
+			if (act_info->proto_act_func) {
+				if (act_info->proto_act_func(action_item,
+							     act_bitmap,
+							     act_prop) !=
+				    BNXT_TF_RC_SUCCESS) {
+					return BNXT_TF_RC_ERROR;
+				}
+			}
+		}
+		action_item++;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
 /* Function to handle the parsing of RTE Flow item PF Header. */
 static int32_t
 ulp_rte_parser_svif_set(struct ulp_rte_hdr_bitmap *hdr_bitmap,
@@ -765,3 +819,390 @@ ulp_rte_void_hdr_handler(const struct rte_flow_item *item __rte_unused,
 {
 	return BNXT_TF_RC_SUCCESS;
 }
+
+/* Function to handle the parsing of RTE Flow action void Header. */
+int32_t
+ulp_rte_void_act_handler(const struct rte_flow_action *action_item __rte_unused,
+			 struct ulp_rte_act_bitmap *act __rte_unused,
+			 struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action Mark Header. */
+int32_t
+ulp_rte_mark_act_handler(const struct rte_flow_action *action_item,
+			 struct ulp_rte_act_bitmap *act,
+			 struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_mark *mark;
+	uint32_t mark_id = 0;
+
+	mark = action_item->conf;
+	if (mark) {
+		mark_id = tfp_cpu_to_be_32(mark->id);
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_MARK],
+		       &mark_id, BNXT_ULP_ACT_PROP_SZ_MARK);
+
+		/* Update the act_bitmap with mark */
+		ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_MARK);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: Mark arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
+
+/* Function to handle the parsing of RTE Flow action RSS Header. */
+int32_t
+ulp_rte_rss_act_handler(const struct rte_flow_action *action_item,
+			struct ulp_rte_act_bitmap *act,
+			struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	const struct rte_flow_action_rss *rss;
+
+	rss = action_item->conf;
+	if (rss) {
+		/* Update the act_bitmap with rss */
+		ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_RSS);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: RSS arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
+
+/* Function to handle the parsing of RTE Flow action vxlan_encap Header. */
+int32_t
+ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
+				struct ulp_rte_act_bitmap *act,
+				struct ulp_rte_act_prop *ap)
+{
+	const struct rte_flow_action_vxlan_encap *vxlan_encap;
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv6 *ipv6_spec;
+	struct rte_flow_item_vxlan vxlan_spec;
+	uint32_t vlan_num = 0, vlan_size = 0;
+	uint32_t ip_size = 0, ip_type = 0;
+	uint32_t vxlan_size = 0;
+	uint8_t *buff;
+	/* IP header per byte - ver/hlen, TOS, ID, ID, FRAG, FRAG, TTL, PROTO */
+	const uint8_t	def_ipv4_hdr[] = {0x45, 0x00, 0x00, 0x01, 0x00,
+				    0x00, 0x40, 0x11};
+
+	vxlan_encap = action_item->conf;
+	if (!vxlan_encap) {
+		BNXT_TF_DBG(ERR, "Parse Error: Vxlan_encap arg is invalid\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	item = vxlan_encap->definition;
+	if (!item) {
+		BNXT_TF_DBG(ERR, "Parse Error: definition arg is invalid\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (!ulp_rte_item_skip_void(&item, 0))
+		return BNXT_TF_RC_ERROR;
+
+	/* must have ethernet header */
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		BNXT_TF_DBG(ERR, "Parse Error:vxlan encap does not have eth\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	eth_spec = item->spec;
+	buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC];
+	ulp_encap_buffer_copy(buff,
+			      eth_spec->dst.addr_bytes,
+			      BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC);
+
+	/* Go to the next item */
+	if (!ulp_rte_item_skip_void(&item, 1))
+		return BNXT_TF_RC_ERROR;
+
+	/* May have vlan header */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		vlan_num++;
+		buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG];
+		ulp_encap_buffer_copy(buff,
+				      item->spec,
+				      sizeof(struct rte_flow_item_vlan));
+
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	}
+
+	/* may have two vlan headers */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		vlan_num++;
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG +
+		       sizeof(struct rte_flow_item_vlan)],
+		       item->spec,
+		       sizeof(struct rte_flow_item_vlan));
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	}
+	/* Update the vlan count and size if one or more vlans are present */
+	if (vlan_num) {
+		vlan_size = vlan_num * sizeof(struct rte_flow_item_vlan);
+		vlan_num = tfp_cpu_to_be_32(vlan_num);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_NUM],
+		       &vlan_num,
+		       sizeof(uint32_t));
+		vlan_size = tfp_cpu_to_be_32(vlan_size);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_SZ],
+		       &vlan_size,
+		       sizeof(uint32_t));
+	}
+
+	/* L3 must be IPv4 or IPv6 */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+		ipv4_spec = item->spec;
+		ip_size = BNXT_ULP_ENCAP_IPV4_SIZE;
+
+		/* copy the ipv4 details */
+		if (ulp_buffer_is_empty(&ipv4_spec->hdr.version_ihl,
+					BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS)) {
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP];
+			ulp_encap_buffer_copy(buff,
+					      def_ipv4_hdr,
+					      BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS +
+					      BNXT_ULP_ENCAP_IPV4_ID_PROTO);
+		} else {
+			const uint8_t *tmp_buff;
+
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP];
+			ulp_encap_buffer_copy(buff,
+					      &ipv4_spec->hdr.version_ihl,
+					      BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS);
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP +
+			     BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS];
+			tmp_buff = (const uint8_t *)&ipv4_spec->hdr.packet_id;
+			ulp_encap_buffer_copy(buff,
+					      tmp_buff,
+					      BNXT_ULP_ENCAP_IPV4_ID_PROTO);
+		}
+		buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP +
+		    BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS +
+		    BNXT_ULP_ENCAP_IPV4_ID_PROTO];
+		ulp_encap_buffer_copy(buff,
+				      (const uint8_t *)&ipv4_spec->hdr.dst_addr,
+				      BNXT_ULP_ENCAP_IPV4_DEST_IP);
+
+		/* Update the ip size details */
+		ip_size = tfp_cpu_to_be_32(ip_size);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ],
+		       &ip_size, sizeof(uint32_t));
+
+		/* update the ip type */
+		ip_type = rte_cpu_to_be_32(BNXT_ULP_ETH_IPV4);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
+		       &ip_type, sizeof(uint32_t));
+
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	} else if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		ipv6_spec = item->spec;
+		ip_size = BNXT_ULP_ENCAP_IPV6_SIZE;
+
+		/* copy the ipv6 details */
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP],
+		       ipv6_spec, BNXT_ULP_ENCAP_IPV6_SIZE);
+
+		/* Update the ip size details */
+		ip_size = tfp_cpu_to_be_32(ip_size);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ],
+		       &ip_size, sizeof(uint32_t));
+
+		/* update the ip type */
+		ip_type = rte_cpu_to_be_32(BNXT_ULP_ETH_IPV6);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
+		       &ip_type, sizeof(uint32_t));
+
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	} else {
+		BNXT_TF_DBG(ERR, "Parse Error: Vxlan Encap expects L3 hdr\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* L4 is UDP */
+	if (item->type != RTE_FLOW_ITEM_TYPE_UDP) {
+		BNXT_TF_DBG(ERR, "vxlan encap does not have udp\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	/* copy the udp details */
+	ulp_encap_buffer_copy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP],
+			      item->spec, BNXT_ULP_ENCAP_UDP_SIZE);
+
+	if (!ulp_rte_item_skip_void(&item, 1))
+		return BNXT_TF_RC_ERROR;
+
+	/* Finally VXLAN */
+	if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) {
+		BNXT_TF_DBG(ERR, "vxlan encap does not have vni\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	vxlan_size = sizeof(struct rte_flow_item_vxlan);
+	/* copy the vxlan details */
+	memcpy(&vxlan_spec, item->spec, vxlan_size);
+	vxlan_spec.flags = 0x08;
+	ulp_encap_buffer_copy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN],
+			      (const uint8_t *)&vxlan_spec,
+			      vxlan_size);
+	vxlan_size = tfp_cpu_to_be_32(vxlan_size);
+	memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ],
+	       &vxlan_size, sizeof(uint32_t));
+
+	/* update the act_bitmap with vxlan encap */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VXLAN_ENCAP);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action vxlan_decap Header. */
+int32_t
+ulp_rte_vxlan_decap_act_handler(const struct rte_flow_action *action_item
+				__rte_unused,
+				struct ulp_rte_act_bitmap *act,
+				struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	/* update the act_bitmap with vxlan decap */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VXLAN_DECAP);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action drop Header. */
+int32_t
+ulp_rte_drop_act_handler(const struct rte_flow_action *action_item __rte_unused,
+			 struct ulp_rte_act_bitmap *act,
+			 struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	/* Update the act_bitmap with drop */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_DROP);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action count. */
+int32_t
+ulp_rte_count_act_handler(const struct rte_flow_action *action_item,
+			  struct ulp_rte_act_bitmap *act,
+			  struct ulp_rte_act_prop *act_prop __rte_unused)
+
+{
+	const struct rte_flow_action_count *act_count;
+
+	act_count = action_item->conf;
+	if (act_count) {
+		if (act_count->shared) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Error:Shared count not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_COUNT],
+		       &act_count->id,
+		       BNXT_ULP_ACT_PROP_SZ_COUNT);
+	}
+
+	/* Update the act_bitmap with count */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_COUNT);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action PF. */
+int32_t
+ulp_rte_pf_act_handler(const struct rte_flow_action *action_item __rte_unused,
+		       struct ulp_rte_act_bitmap *act,
+		       struct ulp_rte_act_prop *act_prop)
+{
+	uint8_t *svif_buf;
+	uint8_t *vnic_buffer;
+	uint32_t svif;
+
+	/* Update the act_bitmap with vnic bit */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VNIC);
+
+	/* copy the PF of the current device into VNIC Property */
+	svif_buf = &act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC];
+	ulp_util_field_int_read(svif_buf, &svif);
+	vnic_buffer = &act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC];
+	ulp_util_field_int_write(vnic_buffer, svif);
+
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action VF. */
+int32_t
+ulp_rte_vf_act_handler(const struct rte_flow_action *action_item,
+		       struct ulp_rte_act_bitmap *act,
+		       struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_vf *vf_action;
+
+	vf_action = action_item->conf;
+	if (vf_action) {
+		if (vf_action->original) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Error:VF Original not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		/* TBD: Update the computed VNIC using VF conversion */
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		       &vf_action->id,
+		       BNXT_ULP_ACT_PROP_SZ_VNIC);
+	}
+
+	/* Update the act_bitmap with vnic */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VNIC);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action port_id. */
+int32_t
+ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
+			    struct ulp_rte_act_bitmap *act,
+			    struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_port_id *port_id;
+
+	port_id = act_item->conf;
+	if (port_id) {
+		if (port_id->original) {
+			BNXT_TF_DBG(ERR,
+				    "ParseErr:Portid Original not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		/* TBD: Update the computed VNIC using port conversion */
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		       &port_id->id,
+		       BNXT_ULP_ACT_PROP_SZ_VNIC);
+	}
+
+	/* Update the act_bitmap with vnic */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VNIC);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action phy_port. */
+int32_t
+ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
+			     struct ulp_rte_act_bitmap *act,
+			     struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_phy_port *phy_port;
+
+	phy_port = action_item->conf;
+	if (phy_port) {
+		if (phy_port->original) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Err:Port Original not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
+		       &phy_port->index,
+		       BNXT_ULP_ACT_PROP_SZ_VPORT);
+	}
+
+	/* Update the act_bitmap with vport */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VPORT);
+	return BNXT_TF_RC_SUCCESS;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index 3a7845d..0ab43d2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -12,6 +12,14 @@
 #include "ulp_template_db.h"
 #include "ulp_template_struct.h"
 
+/* defines to be used in the tunnel header parsing */
+#define BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS	2
+#define BNXT_ULP_ENCAP_IPV4_ID_PROTO		6
+#define BNXT_ULP_ENCAP_IPV4_DEST_IP		4
+#define BNXT_ULP_ENCAP_IPV4_SIZE		12
+#define BNXT_ULP_ENCAP_IPV6_SIZE		8
+#define BNXT_ULP_ENCAP_UDP_SIZE			4
+
 /*
  * Function to handle the parsing of RTE Flows and placing
  * the RTE flow items into the ulp structures.
@@ -21,6 +29,15 @@ bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
 			      struct ulp_rte_hdr_bitmap *hdr_bitmap,
 			      struct ulp_rte_hdr_field  *hdr_field);
 
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow actions into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action	actions[],
+			      struct ulp_rte_act_bitmap		*act_bitmap,
+			      struct ulp_rte_act_prop		*act_prop);
+
 /* Function to handle the parsing of RTE Flow item PF Header. */
 int32_t
 ulp_rte_pf_hdr_handler(const struct rte_flow_item	*item,
@@ -45,7 +62,7 @@ ulp_rte_port_id_hdr_handler(const struct rte_flow_item	*item,
 			    uint32_t			*field_idx,
 			    uint32_t			*vlan_idx);
 
-/* Function to handle the parsing of RTE Flow item port id Header. */
+/* Function to handle the parsing of RTE Flow item phy port Header. */
 int32_t
 ulp_rte_phy_port_hdr_handler(const struct rte_flow_item	*item,
 			     struct ulp_rte_hdr_bitmap	*hdr_bitmap,
@@ -117,4 +134,70 @@ ulp_rte_void_hdr_handler(const struct rte_flow_item	*item,
 			 uint32_t			*field_idx,
 			 uint32_t			*vlan_idx);
 
+/* Function to handle the parsing of RTE Flow action void Header. */
+int32_t
+ulp_rte_void_act_handler(const struct rte_flow_action	*action_item,
+			 struct ulp_rte_act_bitmap	*act,
+			 struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action RSS Header. */
+int32_t
+ulp_rte_rss_act_handler(const struct rte_flow_action	*action_item,
+			struct ulp_rte_act_bitmap	*act,
+			struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action Mark Header. */
+int32_t
+ulp_rte_mark_act_handler(const struct rte_flow_action	*action_item,
+			 struct ulp_rte_act_bitmap	*act,
+			 struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action vxlan_encap Header. */
+int32_t
+ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action	*action_item,
+				struct ulp_rte_act_bitmap	*act,
+				struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action vxlan_decap Header. */
+int32_t
+ulp_rte_vxlan_decap_act_handler(const struct rte_flow_action	*action_item,
+				struct ulp_rte_act_bitmap	*act,
+				struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action drop Header. */
+int32_t
+ulp_rte_drop_act_handler(const struct rte_flow_action	*action_item,
+			 struct ulp_rte_act_bitmap	*act,
+			 struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action count. */
+int32_t
+ulp_rte_count_act_handler(const struct rte_flow_action	*action_item,
+			  struct ulp_rte_act_bitmap	*act,
+			  struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action PF. */
+int32_t
+ulp_rte_pf_act_handler(const struct rte_flow_action	*action_item,
+		       struct ulp_rte_act_bitmap	*act,
+		       struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action VF. */
+int32_t
+ulp_rte_vf_act_handler(const struct rte_flow_action	*action_item,
+		       struct ulp_rte_act_bitmap	*act,
+		       struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action port_id. */
+int32_t
+ulp_rte_port_id_act_handler(const struct rte_flow_action	*act_item,
+			    struct ulp_rte_act_bitmap		*act,
+			    struct ulp_rte_act_prop		*act_p);
+
+/* Function to handle the parsing of RTE Flow action phy_port. */
+int32_t
+ulp_rte_phy_port_act_handler(const struct rte_flow_action	*action_item,
+			     struct ulp_rte_act_bitmap		*act,
+			     struct ulp_rte_act_prop		*act_prop);
+
 #endif /* _ULP_RTE_PARSER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 1d15563..89572c7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -96,6 +96,205 @@ uint32_t ulp_act_prop_map_table[] = {
 		BNXT_ULP_ACT_PROP_SZ_LAST
 };
 
+struct bnxt_ulp_rte_act_info ulp_act_info[] = {
+	[RTE_FLOW_ACTION_TYPE_END] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_END,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_VOID] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_void_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PASSTHRU] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_JUMP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_MARK] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_mark_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_FLAG] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_QUEUE] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DROP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_drop_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_COUNT] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_count_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_RSS] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_rss_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PF] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_pf_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_VF] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_vf_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PHY_PORT] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_phy_port_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PORT_ID] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_port_id_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_METER] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SECURITY] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_MPLS_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_DEC_MPLS_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_NW_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_DEC_NW_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_COPY_TTL_OUT] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_COPY_TTL_IN] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_POP_MPLS] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_vxlan_encap_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_VXLAN_DECAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_vxlan_decap_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_NVGRE_DECAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_RAW_ENCAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_RAW_DECAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV4_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV6_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_TP_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_TP_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_MAC_SWAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DEC_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_MAC_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_MAC_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_INC_TCP_ACK] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	}
+};
+
 struct bnxt_ulp_device_params ulp_device_params[] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
 		.global_fid_enable       = BNXT_ULP_SYM_YES,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 906b542..dfab266 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -74,6 +74,13 @@ enum bnxt_ulp_hdr_bit {
 	BNXT_ULP_HDR_BIT_LAST                = 0x0000000000200000
 };
 
+enum bnxt_ulp_act_type {
+	BNXT_ULP_ACT_TYPE_NOT_SUPPORTED = 0,
+	BNXT_ULP_ACT_TYPE_SUPPORTED = 1,
+	BNXT_ULP_ACT_TYPE_END = 2,
+	BNXT_ULP_ACT_TYPE_LAST = 3
+};
+
 enum bnxt_ulp_byte_order {
 	BNXT_ULP_BYTE_ORDER_BE,
 	BNXT_ULP_BYTE_ORDER_LE,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 0699634..47c0dd8 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -72,6 +72,19 @@ struct ulp_rte_act_prop {
 	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
 };
 
+/* Flow Parser Action Information Structure */
+struct bnxt_ulp_rte_act_info {
+	enum bnxt_ulp_act_type					act_type;
+	/* Flow Parser Protocol Action Function Prototype */
+	int32_t (*proto_act_func)
+		(const struct rte_flow_action			*action_item,
+		struct ulp_rte_act_bitmap			*act_bitmap,
+		struct ulp_rte_act_prop				*act_prop);
+};
+
+/* Flow Parser Action Information Structure Array defined in template source */
+extern struct bnxt_ulp_rte_act_info	ulp_act_info[];
+
 /* Flow Matcher structures */
 struct bnxt_ulp_header_match_info {
 	struct ulp_rte_hdr_bitmap		hdr_bitmap;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 26/33] net/bnxt: add support for rte flow create driver hook
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (24 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 25/33] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 27/33] net/bnxt: add support for rte flow validate " Venkat Duvvuru
                   ` (7 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Validates rte_flow_create arguments
2. Parses rte_flow_item types
3. Parses rte_flow_action types
4. Calls ulp_matcher_pattern_match to check whether the flow is supported
5. If there is a match, calls ulp_mapper_flow_create to program the
   key & action tables
A minimal application-side usage sketch follows.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile               |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 177 ++++++++++++++++++++++++++++++++
 2 files changed, 178 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 29d45e7..bce5632 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -65,6 +65,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_mapper.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_matcher.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/ulp_rte_parser.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_ulp/bnxt_ulp_flow.c
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
new file mode 100644
index 0000000..6402dd3
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -0,0 +1,177 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "bnxt_tf_common.h"
+#include "ulp_rte_parser.h"
+#include "ulp_matcher.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+#include <rte_malloc.h>
+
+static int32_t
+bnxt_ulp_flow_validate_args(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item pattern[],
+			    const struct rte_flow_action actions[],
+			    struct rte_flow_error *error)
+{
+	/* Validate that the arguments are not NULL */
+	if (!error)
+		return BNXT_TF_RC_ERROR;
+
+	if (!pattern) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				   NULL,
+				   "NULL pattern.");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL,
+				   "NULL action.");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL,
+				   "NULL attribute.");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (attr->egress && attr->ingress) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   attr,
+				   "EGRESS AND INGRESS UNSUPPORTED");
+		return BNXT_TF_RC_ERROR;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to create the rte flow. */
+static struct rte_flow *
+bnxt_ulp_flow_create(struct rte_eth_dev			*dev,
+		     const struct rte_flow_attr		*attr,
+		     const struct rte_flow_item		pattern[],
+		     const struct rte_flow_action	actions[],
+		     struct rte_flow_error		*error)
+{
+	struct ulp_rte_hdr_bitmap hdr_bitmap;
+	struct ulp_rte_hdr_field hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	struct ulp_rte_act_bitmap act_bitmap;
+	struct ulp_rte_act_prop act_prop;
+	enum ulp_direction_type dir = ULP_DIR_INGRESS;
+	uint32_t class_id, act_tmpl;
+	uint32_t app_priority;
+	int ret;
+	struct bnxt_ulp_context *ulp_ctx = NULL;
+	uint32_t vnic;
+	uint8_t svif;
+	struct rte_flow *flow_id;
+	uint32_t fid;
+
+	if (bnxt_ulp_flow_validate_args(attr,
+					pattern, actions,
+					error) == BNXT_TF_RC_ERROR) {
+		BNXT_TF_DBG(ERR, "Invalid arguments being passed\n");
+		return NULL;
+	}
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		return NULL;
+	}
+
+	/* clear the header bitmap and field structure */
+	memset(&hdr_bitmap, 0, sizeof(struct ulp_rte_hdr_bitmap));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(&act_bitmap, 0, sizeof(act_bitmap));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	svif = bnxt_get_svif(dev->data->port_id, false);
+	BNXT_TF_DBG(DEBUG, "SVIF for port[%d] = 0x%02x\n",
+		    dev->data->port_id, svif);
+
+	hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].size = sizeof(svif);
+	hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].spec[0] = svif;
+	hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].mask[0] = -1;
+	ULP_BITMAP_SET(hdr_bitmap.bits, BNXT_ULP_HDR_BIT_SVIF);
+
+	/*
+	 * The VNIC is pushed as a 32-bit value here; the consumer pops
+	 * it at the proper size.
+	 */
+	vnic = (uint32_t)bnxt_get_vnic_id(dev->data->port_id);
+	vnic = htonl(vnic);
+	rte_memcpy(&act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		   &vnic, BNXT_ULP_ACT_PROP_SZ_VNIC);
+
+	/* Parse the rte flow pattern */
+	ret = bnxt_ulp_rte_parser_hdr_parse(pattern,
+					    &hdr_bitmap,
+					    hdr_field);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	/* Parse the rte flow action */
+	ret = bnxt_ulp_rte_parser_act_parse(actions,
+					    &act_bitmap,
+					    &act_prop);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	if (attr->egress)
+		dir = ULP_DIR_EGRESS;
+
+	ret = ulp_matcher_pattern_match(dir, &hdr_bitmap, hdr_field,
+					&act_bitmap, &class_id);
+
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	ret = ulp_matcher_action_match(dir, &act_bitmap, &act_tmpl);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	app_priority = attr->priority;
+	/* call the ulp mapper to create the flow in the hardware */
+	ret = ulp_mapper_flow_create(ulp_ctx,
+				     app_priority,
+				     &hdr_bitmap,
+				     hdr_field,
+				     &act_bitmap,
+				     &act_prop,
+				     class_id,
+				     act_tmpl,
+				     &fid);
+	if (!ret) {
+		flow_id = (struct rte_flow *)((uintptr_t)fid);
+		return flow_id;
+	}
+
+parse_error:
+	rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+			   "Failed to create flow.");
+	return NULL;
+}
+
+const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
+	.validate = NULL,
+	.create = bnxt_ulp_flow_create,
+	.destroy = NULL,
+	.flush = NULL,
+	.query = NULL,
+	.isolate = NULL
+};
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 27/33] net/bnxt: add support for rte flow validate driver hook
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (25 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 26/33] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 28/33] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
                   ` (6 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Validates the rte_flow arguments
2. Parses rte_flow_item types
3. Parses rte_flow_action types
4. Calls ulp_matcher_pattern_match to check whether the flow is supported
5. If there is a match, returns success; otherwise, failure
A minimal application-side usage sketch follows.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 67 ++++++++++++++++++++++++++++++++-
 1 file changed, 66 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 6402dd3..490b2ba 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -167,8 +167,73 @@ bnxt_ulp_flow_create(struct rte_eth_dev			*dev,
 	return NULL;
 }
 
+/* Function to validate the rte flow. */
+static int
+bnxt_ulp_flow_validate(struct rte_eth_dev *dev __rte_unused,
+		       const struct rte_flow_attr *attr,
+		       const struct rte_flow_item pattern[],
+		       const struct rte_flow_action actions[],
+		       struct rte_flow_error *error)
+{
+	struct ulp_rte_hdr_bitmap hdr_bitmap;
+	struct ulp_rte_hdr_field hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	struct ulp_rte_act_bitmap act_bitmap;
+	struct ulp_rte_act_prop act_prop;
+	enum ulp_direction_type dir = ULP_DIR_INGRESS;
+	uint32_t class_id, act_tmpl;
+	int ret;
+
+	if (bnxt_ulp_flow_validate_args(attr,
+					pattern, actions,
+					error) == BNXT_TF_RC_ERROR) {
+		BNXT_TF_DBG(ERR, "Invalid arguments being passed\n");
+		return -EINVAL;
+	}
+
+	/* clear the header bitmap and field structure */
+	memset(&hdr_bitmap, 0, sizeof(struct ulp_rte_hdr_bitmap));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(&act_bitmap, 0, sizeof(act_bitmap));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	/* Parse the rte flow pattern */
+	ret = bnxt_ulp_rte_parser_hdr_parse(pattern,
+					    &hdr_bitmap,
+					    hdr_field);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	/* Parse the rte flow action */
+	ret = bnxt_ulp_rte_parser_act_parse(actions,
+					    &act_bitmap,
+					    &act_prop);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	if (attr->egress)
+		dir = ULP_DIR_EGRESS;
+
+	ret = ulp_matcher_pattern_match(dir, &hdr_bitmap, hdr_field,
+					&act_bitmap, &class_id);
+
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	ret = ulp_matcher_action_match(dir, &act_bitmap, &act_tmpl);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	/* all good, return success */
+	return ret;
+
+parse_error:
+	rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+			   "Failed to validate flow.");
+	return -EINVAL;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
-	.validate = NULL,
+	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = NULL,
 	.flush = NULL,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 28/33] net/bnxt: add support for rte flow destroy driver hook
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (26 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 27/33] net/bnxt: add support for rte flow validate " Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 29/33] net/bnxt: add support for rte flow flush " Venkat Duvvuru
                   ` (5 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Gets the ulp session information from eth_dev
2. Fetches the flow associated with the flow id from the flow table
3. Calls ulp_mapper_resources_free, which releases the key & action
   tables associated with that flow
A minimal application-side usage sketch follows.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 490b2ba..35099a3 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -232,10 +232,40 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev __rte_unused,
 	return -EINVAL;
 }
 
+/* Function to destroy the rte flow. */
+static int
+bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
+		      struct rte_flow *flow,
+		      struct rte_flow_error *error)
+{
+	int ret = 0;
+	struct bnxt_ulp_context *ulp_ctx;
+	uint32_t fid;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to destroy flow.");
+		return -EINVAL;
+	}
+
+	fid = (uint32_t)(uintptr_t)flow;
+
+	ret = ulp_mapper_flow_destroy(ulp_ctx, fid);
+	if (ret)
+		rte_flow_error_set(error, -ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to destroy flow.");
+
+	return ret;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
-	.destroy = NULL,
+	.destroy = bnxt_ulp_flow_destroy,
 	.flush = NULL,
 	.query = NULL,
 	.isolate = NULL
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 29/33] net/bnxt: add support for rte flow flush driver hook
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (27 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 28/33] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 30/33] net/bnxt: register tf rte flow ops Venkat Duvvuru
                   ` (4 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Gets the ulp session information from eth_dev
2. Fetches the rte_flow table associated with this session
3. Iterates through all the flows in the flow table
4. Calls ulp_mapper_resources_free, which releases the key & action
   tables associated with each flow
A minimal application-side usage sketch follows.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c      |  3 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 33 +++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c   | 69 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h   | 11 ++++++
 4 files changed, 115 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 3795c6d..56e08f2 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -517,6 +517,9 @@ bnxt_ulp_deinit(struct bnxt *bp)
 	if (!session)
 		return;
 
+	/* clean up regular flows */
+	ulp_flow_db_flush_flows(&bp->ulp_ctx, BNXT_ULP_REGULAR_FLOW_TABLE);
+
 	/* cleanup the eem table scope */
 	ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 35099a3..4958895 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -262,11 +262,42 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Function to flush all the rte flows. */
+static int32_t
+bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
+		    struct rte_flow_error *error)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+	int32_t ret;
+	struct bnxt *bp;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to flush flow.");
+		return -EINVAL;
+	}
+	bp = eth_dev->data->dev_private;
+
+	/* Only flush the flows if this is the last device using the session */
+	if (!ulp_ctx_deinit_allowed(bp))
+		return 0;
+
+	ret = ulp_flow_db_flush_flows(ulp_ctx, BNXT_ULP_REGULAR_FLOW_TABLE);
+	if (ret)
+		rte_flow_error_set(error, ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to flush flow.");
+	return ret;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = bnxt_ulp_flow_destroy,
-	.flush = NULL,
+	.flush = bnxt_ulp_flow_flush,
 	.query = NULL,
 	.isolate = NULL
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 76ec856..68ba6d4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -555,3 +555,72 @@ int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
 	/* all good, return success */
 	return 0;
 }
+
+/** Get the flow database entry iteratively
+ *
+ * flowtbl [in] Ptr to flow table
+ * fid [in/out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+static int32_t
+ulp_flow_db_next_entry_get(struct bnxt_ulp_flow_tbl	*flowtbl,
+			   uint32_t			*fid)
+{
+	uint32_t	lfid = *fid;
+	uint32_t	idx;
+	uint64_t	bs;
+
+	do {
+		lfid++;
+		if (lfid >= flowtbl->num_flows)
+			return -ENOENT;
+		idx = lfid / ULP_INDEX_BITMAP_SIZE;
+		while (!(bs = flowtbl->active_flow_tbl[idx])) {
+			idx++;
+			if ((idx * ULP_INDEX_BITMAP_SIZE) >= flowtbl->num_flows)
+				return -ENOENT;
+		}
+		lfid = (idx * ULP_INDEX_BITMAP_SIZE) + __builtin_clzl(bs);
+		if (*fid >= lfid) {
+			BNXT_TF_DBG(ERR, "Flow Database is corrupt\n");
+			return -ENOENT;
+		}
+	} while (!ulp_flow_db_active_flow_is_set(flowtbl, lfid));
+
+	/* all good, return success */
+	*fid = lfid;
+	return 0;
+}
+
+/*
+ * Flush all flows in the flow database.
+ *
+ * ulp_ctx [in] Ptr to ulp context
+ * idx [in] The index to the flow table
+ *
+ * returns 0 on success or negative number on failure
+ */
+int32_t	ulp_flow_db_flush_flows(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t		idx)
+{
+	uint32_t			fid = 0;
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid Argument\n");
+		return -EINVAL;
+	}
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctx);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Flow database not found\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[idx];
+	while (!ulp_flow_db_next_entry_get(flow_tbl, &fid))
+		(void)ulp_mapper_resources_free(ulp_ctx, fid, idx);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index eb5effa..5435415 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -142,4 +142,15 @@ int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
 			     enum bnxt_ulp_flow_db_tables	tbl_idx,
 			     uint32_t				fid);
 
+/*
+ * Flush all flows in the flow database.
+ *
+ * ulp_ctx [in] Ptr to ulp context
+ * idx [in] The index to the flow table
+ *
+ * returns 0 on success or negative number on failure
+ */
+int32_t	ulp_flow_db_flush_flows(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t		idx);
+
 #endif /* _ULP_FLOW_DB_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 30/33] net/bnxt: register tf rte flow ops
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (28 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 29/33] net/bnxt: add support for rte flow flush " Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 31/33] net/bnxt: disable vector mode when BNXT TRUFLOW is enabled Venkat Duvvuru
                   ` (3 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

Register bnxt_ulp_rte_flow_ops instead of the legacy bnxt_flow_ops
when RTE_LIBRTE_BNXT_TRUFLOW is enabled.

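For context, a minimal sketch of how the rte_flow layer retrieves
these ops through the generic filter interface that this patch hooks.
The helper name is illustrative and error handling is trimmed.

	#include <rte_ethdev.h>
	#include <rte_flow_driver.h>

	static const struct rte_flow_ops *
	get_flow_ops(uint16_t port_id)
	{
		const struct rte_flow_ops *ops = NULL;

		/* The PMD's filter_ctrl hook fills in the ops pointer. */
		if (rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_GENERIC,
					    RTE_ETH_FILTER_GET, &ops) < 0)
			return NULL;
		return ops;
	}
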
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        | 1 +
 drivers/net/bnxt/bnxt_ethdev.c | 4 ++++
 2 files changed, 5 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 3cb8ba3..98dc575 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -734,6 +734,7 @@ extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
 #ifdef RTE_LIBRTE_BNXT_TRUFLOW
+extern const struct rte_flow_ops bnxt_ulp_rte_flow_ops;
 int32_t bnxt_ulp_init(struct bnxt *bp);
 void bnxt_ulp_deinit(struct bnxt *bp);
 #endif
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 2064f21..7f73fe6 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3300,7 +3300,11 @@ bnxt_filter_ctrl_op(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_GENERIC:
 		if (filter_op != RTE_ETH_FILTER_GET)
 			return -EINVAL;
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW
+		*(const void **)arg = &bnxt_ulp_rte_flow_ops;
+#else
 		*(const void **)arg = &bnxt_flow_ops;
+#endif
 		break;
 	default:
 		PMD_DRV_LOG(ERR,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 31/33] net/bnxt: disable vector mode when BNXT TRUFLOW is enabled
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (29 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 30/33] net/bnxt: register tf rte flow ops Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 32/33] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
                   ` (2 subsequent siblings)
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

Clear the BNXT_FLAG_RX_VECTOR_PKT_MODE bit in bp->flags when
RTE_LIBRTE_BNXT_TRUFLOW is enabled, so that the scalar receive path
(which can inject the TruFlow mark into the mbuf) is always selected.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 7f73fe6..faf76c6 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -758,6 +758,7 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 {
 	struct bnxt *bp = eth_dev->data->dev_private;
 
+#ifndef RTE_LIBRTE_BNXT_TRUFLOW
 #ifdef RTE_ARCH_X86
 #ifndef RTE_LIBRTE_IEEE1588
 	/*
@@ -790,6 +791,7 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 		    eth_dev->data->dev_conf.rxmode.offloads);
 #endif
 #endif
+#endif
 	bp->flags &= ~BNXT_FLAG_RX_VECTOR_PKT_MODE;
 	return bnxt_recv_pkts;
 }
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 32/33] net/bnxt: add support for injecting mark into packet’s mbuf
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (30 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 31/33] net/bnxt: disable vector mode when BNXT TRUFLOW is enabled Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 33/33] config: introduce BNXT TRUFLOW config flag Venkat Duvvuru
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Mike Baucom

When a flow is offloaded with MARK action (RTE_FLOW_ACTION_TYPE_MARK),
each packet of that flow will have metadata set in its completion.
This metadata will be used to fetch an index into a mark table where
the actual MARK for that flow is stored. Fetch the MARK from the mark
table and inject it into the packet’s mbuf.

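For illustration, a hedged sketch of the application-side flow: offload
with a MARK action and read the mark back from the mbuf. The port id,
pattern, mark value and helper names are placeholders, not part of
this patch.

	#include <stdint.h>
	#include <rte_flow.h>
	#include <rte_mbuf.h>

	static struct rte_flow *
	offload_with_mark(uint16_t port_id, struct rte_flow_error *error)
	{
		struct rte_flow_attr attr = { .ingress = 1 };
		struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		struct rte_flow_action_mark mark = { .id = 0xBEEF };
		struct rte_flow_action_queue queue = { .index = 0 };
		struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
			{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};

		return rte_flow_create(port_id, &attr, pattern, actions,
				       error);
	}

	/* On receive, the mark shows up via the FDIR id field. */
	static uint32_t mbuf_mark(const struct rte_mbuf *m)
	{
		if (m->ol_flags & PKT_RX_FDIR_ID)
			return m->hash.fdir.hi;
		return UINT32_MAX;	/* no mark present */
	}
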
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_rxr.c            | 156 ++++++++++++++++++++++++---------
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  55 +++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |  18 ++++
 3 files changed, 186 insertions(+), 43 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index bef9720..e3c677e 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -19,6 +19,10 @@
 #ifdef RTE_LIBRTE_IEEE1588
 #include "bnxt_hwrm.h"
 #endif
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW
+#include <bnxt_tf_common.h>
+#include <ulp_mark_mgr.h>
+#endif
 
 /*
  * RX Ring handling
@@ -399,6 +403,112 @@ bnxt_get_rx_ts_thor(struct bnxt *bp, uint32_t rx_ts_cmpl)
 }
 #endif
 
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW
+static void
+bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
+			  struct rte_mbuf *mbuf)
+{
+	uint32_t cfa_code;
+	uint32_t meta_fmt;
+	uint32_t meta;
+	uint32_t eem = 0;
+	uint32_t mark_id;
+	uint32_t flags2;
+	int rc;
+
+	cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
+	flags2 = rte_le_to_cpu_32(rxcmp1->flags2);
+	meta = rte_le_to_cpu_32(rxcmp1->metadata);
+	if (meta) {
+		meta >>= BNXT_RX_META_CFA_CODE_SHIFT;
+
+		/* The flags field holds extra bits of info from [6:4]
+		 * which indicate if the flow is in TCAM or EM or EEM
+		 */
+		meta_fmt = (flags2 & BNXT_CFA_META_FMT_MASK) >>
+			    BNXT_CFA_META_FMT_SHFT;
+		/* meta_fmt == 4 => 'b100 => 'b10x => EM.
+		 * meta_fmt == 5 => 'b101 => 'b10x => EM + VLAN
+		 * meta_fmt == 6 => 'b110 => 'b11x => EEM
+		 * meta_fmt == 7 => 'b111 => 'b11x => EEM + VLAN.
+		 */
+		meta_fmt >>= BNXT_CFA_META_FMT_EM_EEM_SHFT;
+
+		eem = meta_fmt == BNXT_CFA_META_FMT_EEM;
+
+		/* For EEM flows, The first part of cfa_code is 16 bits.
+		 * The second part is embedded in the
+		 * metadata field from bit 19 onwards. The driver needs to
+		 * ignore the first 19 bits of metadata and use the next 12
+		 * bits as higher 12 bits of cfa_code.
+		 */
+		if (eem)
+			cfa_code |= meta << BNXT_CFA_CODE_META_SHIFT;
+	}
+
+	if (cfa_code) {
+		mbuf->hash.fdir.hi = 0;
+		mbuf->hash.fdir.id = 0;
+		if (eem)
+			rc = ulp_mark_db_mark_get(&bp->ulp_ctx, true,
+						  cfa_code, &mark_id);
+		else
+			rc = ulp_mark_db_mark_get(&bp->ulp_ctx, false,
+						  cfa_code, &mark_id);
+		/* If the above fails, simply return and don't add the mark to
+		 * mbuf
+		 */
+		if (rc)
+			return;
+
+		mbuf->hash.fdir.hi	= mark_id;
+		mbuf->udata64		= (cfa_code & 0xffffffffull) << 32;
+		mbuf->hash.fdir.id	= rxcmp1->cfa_code;
+		mbuf->ol_flags		|= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	}
+}
+
+#else
+void bnxt_set_mark_in_mbuf(struct bnxt *bp,
+			   struct rx_pkt_cmpl_hi *rxcmp1,
+			   struct rte_mbuf *mbuf)
+{
+	uint32_t cfa_code = 0;
+	uint8_t meta_fmt = 0;
+	uint16_t flags2 = 0;
+	uint32_t meta =  0;
+
+	cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
+	if (!cfa_code)
+		return;
+
+	if (cfa_code && !bp->mark_table[cfa_code].valid)
+		return;
+
+	flags2 = rte_le_to_cpu_16(rxcmp1->flags2);
+	meta = rte_le_to_cpu_32(rxcmp1->metadata);
+	if (meta) {
+		meta >>= BNXT_RX_META_CFA_CODE_SHIFT;
+
+		/* The flags field holds extra bits of info from [6:4]
+		 * which indicate if the flow is in TCAM or EM or EEM
+		 */
+		meta_fmt = (flags2 & BNXT_CFA_META_FMT_MASK) >>
+			   BNXT_CFA_META_FMT_SHFT;
+
+		/* meta_fmt == 4 => 'b100 => 'b10x => EM.
+		 * meta_fmt == 5 => 'b101 => 'b10x => EM + VLAN
+		 * meta_fmt == 6 => 'b110 => 'b11x => EEM
+		 * meta_fmt == 7 => 'b111 => 'b11x => EEM + VLAN.
+		 */
+		meta_fmt >>= BNXT_CFA_META_FMT_EM_EEM_SHFT;
+	}
+
+	mbuf->hash.fdir.hi = bp->mark_table[cfa_code].mark_id;
+	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+}
+#endif
+
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 			    struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
 {
@@ -489,8 +599,11 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 		mbuf->hash.rss = rxcmp->rss_hash;
 		mbuf->ol_flags |= PKT_RX_RSS_HASH;
 	}
-
+#ifdef RTE_LIBRTE_BNXT_TRUFLOW
+	bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+#else
 	bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+#endif
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (unlikely((flags_type & RX_PKT_CMPL_FLAGS_MASK) ==
@@ -896,44 +1009,3 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
 
 	return 0;
 }
-
-void bnxt_set_mark_in_mbuf(struct bnxt *bp,
-			   struct rx_pkt_cmpl_hi *rxcmp1,
-			   struct rte_mbuf *mbuf)
-{
-	uint32_t cfa_code = 0;
-	uint8_t meta_fmt =  0;
-	uint16_t flags2 = 0;
-	uint32_t meta =  0;
-
-	cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
-	if (!cfa_code)
-		return;
-
-	if (cfa_code && !bp->mark_table[cfa_code].valid)
-		return;
-
-	flags2 = rte_le_to_cpu_16(rxcmp1->flags2);
-	meta = rte_le_to_cpu_32(rxcmp1->metadata);
-	if (meta) {
-		meta >>= BNXT_RX_META_CFA_CODE_SHIFT;
-
-		/*
-		 * The flags field holds extra bits of info from [6:4]
-		 * which indicate if the flow is in TCAM or EM or EEM
-		 */
-		meta_fmt = (flags2 & BNXT_CFA_META_FMT_MASK) >>
-			   BNXT_CFA_META_FMT_SHFT;
-
-		/*
-		 * meta_fmt == 4 => 'b100 => 'b10x => EM.
-		 * meta_fmt == 5 => 'b101 => 'b10x => EM + VLAN
-		 * meta_fmt == 6 => 'b110 => 'b11x => EEM
-		 * meta_fmt == 7 => 'b111 => 'b11x => EEM + VLAN.
-		 */
-		meta_fmt >>= BNXT_CFA_META_FMT_EM_EEM_SHFT;
-	}
-
-	mbuf->hash.fdir.hi = bp->mark_table[cfa_code].mark_id;
-	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 566668e..ad83531 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -58,7 +58,7 @@ ulp_mark_db_mark_set(struct bnxt_ulp_context *ctxt,
 	idx = ulp_mark_db_idx_get(is_gfid, fid, mtbl);
 
 	if (is_gfid) {
-		BNXT_TF_DBG(ERR, "Set GFID[0x%0x] = 0x%0x\n", idx, mark);
+		BNXT_TF_DBG(DEBUG, "Set GFID[0x%0x] = 0x%0x\n", idx, mark);
 
 		mtbl->gfid_tbl[idx].mark_id = mark;
 		mtbl->gfid_tbl[idx].valid = true;
@@ -176,6 +176,59 @@ ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
 }
 
 /*
+ * Get a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * fid [in] The flow id that is returned by HW in BD
+ *
+ * mark [out] The mark that is associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_get(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t *mark)
+{
+	struct bnxt_ulp_mark_tbl *mtbl;
+	uint32_t idx = 0;
+
+	if (!ctxt || !mark)
+		return -EINVAL;
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+	if (!mtbl) {
+		BNXT_TF_DBG(ERR, "Unable to get Mark Table\n");
+		return -EINVAL;
+	}
+
+	idx = ulp_mark_db_idx_get(is_gfid, fid, mtbl);
+
+	if (is_gfid) {
+		if (!mtbl->gfid_tbl[idx].valid)
+			return -EINVAL;
+
+		BNXT_TF_DBG(DEBUG, "Get GFID[0x%0x] = 0x%0x\n",
+			    idx, mtbl->gfid_tbl[idx].mark_id);
+
+		*mark = mtbl->gfid_tbl[idx].mark_id;
+	} else {
+		if (!mtbl->lfid_tbl[idx].valid)
+			return -EINVAL;
+
+		BNXT_TF_DBG(DEBUG, "Get LFID[0x%0x] = 0x%0x\n",
+			    idx, mtbl->lfid_tbl[idx].mark_id);
+
+		*mark = mtbl->lfid_tbl[idx].mark_id;
+	}
+
+	return 0;
+}
+
+/*
  * Adds a Mark to the Mark Manager
  *
  * ctxt [in] The ulp context for the mark manager
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index f0d1515..0f8a5e5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -55,6 +55,24 @@ int32_t
 ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
 
 /*
+ * Get a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * fid [in] The flow id that is returned by HW in BD
+ *
+ * mark [out] The mark that is associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_get(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t *mark);
+
+/*
  * Adds a Mark to the Mark Manager
  *
  * ctxt [in] The ulp context for the mark manager
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH 33/33] config: introduce BNXT TRUFLOW config flag
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (31 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 32/33] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
@ 2020-03-17 15:38 ` Venkat Duvvuru
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
  33 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-03-17 15:38 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

The RTE_LIBRTE_BNXT_TRUFLOW config flag controls the TRUFLOW feature
in Broadcom adapters. The flag is disabled by default.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 config/common_base | 1 +
 1 file changed, 1 insertion(+)

diff --git a/config/common_base b/config/common_base
index c31175f..c8415ff 100644
--- a/config/common_base
+++ b/config/common_base
@@ -219,6 +219,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
 # Compile burst-oriented Broadcom BNXT PMD driver
 #
 CONFIG_RTE_LIBRTE_BNXT_PMD=y
+CONFIG_RTE_LIBRTE_BNXT_TRUFLOW=n
 
 #
 # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management
  2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
                   ` (32 preceding siblings ...)
  2020-03-17 15:38 ` [dpdk-dev] [PATCH 33/33] config: introduce BNXT TRUFLOW config flag Venkat Duvvuru
@ 2020-04-13 19:39 ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 01/34] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
                     ` (35 more replies)
  33 siblings, 36 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

This patchset introduces a new mechanism to allow host-memory based
flow table management. This should allow higher flow scalability
than what is currently supported. This new approach also defines a
new rte_flow parser, and mapper which currently supports basic packet
classification in receive path. The patchset uses a newly implemented
control-plane firmware interface which optimizes flow insertions and
deletions.

This is a baseline patchset with limited scale. Follow on patches will
add support for more protocol headers, rte_flow attributes, actions
and such.

Currently the code path is disabled by default and can be enabled
using bnxt devargs. For example: "-w 0000:0d:00.0,host-based-truflow=1".

v1==>v2
=======
1. Meson build support for new files
2. Devargs support for this feature (a parsing sketch follows below)
3. Fixed compilation error in ulp_mapper.c

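As a hedged illustration of item 2, here is roughly how a PMD parses
such a devargs key with the rte_kvargs API. The key name matches the
example above; the handler and helper names are made up for this
sketch and are not necessarily what the patchset uses.

	#include <stdbool.h>
	#include <stdlib.h>
	#include <rte_common.h>
	#include <rte_devargs.h>
	#include <rte_kvargs.h>

	#define BNXT_DEVARG_TRUFLOW	"host-based-truflow"

	static int
	truflow_handler(const char *key __rte_unused, const char *value,
			void *opaque)
	{
		bool *enabled = opaque;

		*enabled = (strtoul(value, NULL, 10) == 1);
		return 0;
	}

	static bool
	parse_truflow(struct rte_devargs *devargs)
	{
		struct rte_kvargs *kvlist;
		bool enabled = false;

		if (!devargs)
			return false;
		kvlist = rte_kvargs_parse(devargs->args, NULL);
		if (!kvlist)
			return false;
		rte_kvargs_process(kvlist, BNXT_DEVARG_TRUFLOW,
				   truflow_handler, &enabled);
		rte_kvargs_free(kvlist);
		return enabled;
	}
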
Ajit Kumar Khaparde (1):
  net/bnxt: add updated dpdk hsi structure

Farah Smith (2):
  net/bnxt: add tf core identifier support
  net/bnxt: add tf core table scope support

Kishore Padmanabha (8):
  net/bnxt: match rte flow items with flow template patterns
  net/bnxt: match rte flow actions with flow template actions
  net/bnxt: add support for rte flow item parsing
  net/bnxt: add support for rte flow action parsing
  net/bnxt: add support for rte flow create driver hook
  net/bnxt: add support for rte flow validate driver hook
  net/bnxt: add support for rte flow destroy driver hook
  net/bnxt: add support for rte flow flush driver hook

Michael Wildt (4):
  net/bnxt: add initial tf core session open
  net/bnxt: add initial tf core session close support
  net/bnxt: add tf core session sram functions
  net/bnxt: add resource manager functionality

Mike Baucom (5):
  net/bnxt: add helper functions for blob/regfile ops
  net/bnxt: add support to process action tables
  net/bnxt: add support to process key tables
  net/bnxt: add support to free key and action tables
  net/bnxt: add support to alloc and program key and act tbls

Pete Spreadborough (2):
  net/bnxt: add truflow message handlers
  net/bnxt: add EM/EEM functionality

Randy Schacher (1):
  net/bnxt: update hwrm prep to use ptr

Shahaji Bhosle (2):
  net/bnxt: add initial tf core resource mgmt support
  net/bnxt: add tf core TCAM support

Venkat Duvvuru (9):
  net/bnxt: fetch SVIF information from the firmware
  net/bnxt: fetch vnic info from DPDK port
  net/bnxt: add devargs parameter for host memory based TRUFLOW feature
  net/bnxt: add support for ULP session manager init
  net/bnxt: add support for ULP session manager cleanup
  net/bnxt: register tf rte flow ops
  net/bnxt: disable vector mode when host based TRUFLOW is enabled
  net/bnxt: add support for injecting mark into packet’s mbuf
  net/bnxt: enable meson build on truflow code

 drivers/net/bnxt/Makefile                       |   24 +
 drivers/net/bnxt/bnxt.h                         |   21 +-
 drivers/net/bnxt/bnxt_ethdev.c                  |  118 +-
 drivers/net/bnxt/bnxt_hwrm.c                    |  317 +-
 drivers/net/bnxt/bnxt_hwrm.h                    |   19 +
 drivers/net/bnxt/bnxt_rxr.c                     |  153 +-
 drivers/net/bnxt/hsi_struct_def_dpdk.h          | 3786 ++++++++++++++++++++---
 drivers/net/bnxt/meson.build                    |   26 +
 drivers/net/bnxt/tf_core/bitalloc.c             |  364 +++
 drivers/net/bnxt/tf_core/bitalloc.h             |  119 +
 drivers/net/bnxt/tf_core/hwrm_tf.h              |  992 ++++++
 drivers/net/bnxt/tf_core/lookup3.h              |  161 +
 drivers/net/bnxt/tf_core/rand.c                 |   47 +
 drivers/net/bnxt/tf_core/rand.h                 |   36 +
 drivers/net/bnxt/tf_core/stack.c                |  107 +
 drivers/net/bnxt/tf_core/stack.h                |  107 +
 drivers/net/bnxt/tf_core/tf_core.c              |  659 ++++
 drivers/net/bnxt/tf_core/tf_core.h              | 1376 ++++++++
 drivers/net/bnxt/tf_core/tf_em.c                |  516 +++
 drivers/net/bnxt/tf_core/tf_em.h                |  117 +
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h   |  166 +
 drivers/net/bnxt/tf_core/tf_msg.c               | 1248 ++++++++
 drivers/net/bnxt/tf_core/tf_msg.h               |  256 ++
 drivers/net/bnxt/tf_core/tf_msg_common.h        |   47 +
 drivers/net/bnxt/tf_core/tf_project.h           |   24 +
 drivers/net/bnxt/tf_core/tf_resources.h         |  542 ++++
 drivers/net/bnxt/tf_core/tf_rm.c                | 3297 ++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_rm.h                |  321 ++
 drivers/net/bnxt/tf_core/tf_session.h           |  300 ++
 drivers/net/bnxt/tf_core/tf_tbl.c               | 1836 +++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.h               |  126 +
 drivers/net/bnxt/tf_core/tfp.c                  |  163 +
 drivers/net/bnxt/tf_core/tfp.h                  |  188 ++
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h        |   54 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c              |  695 +++++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h              |  110 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c         |  303 ++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c           |  626 ++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h           |  156 +
 drivers/net/bnxt/tf_ulp/ulp_mapper.c            | 1502 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h            |   69 +
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c          |  271 ++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h          |  111 +
 drivers/net/bnxt/tf_ulp/ulp_matcher.c           |  188 ++
 drivers/net/bnxt/tf_ulp/ulp_matcher.h           |   35 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c        | 1208 ++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h        |  203 ++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c       | 1712 ++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h       |  354 +++
 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h |  133 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h   |  266 ++
 drivers/net/bnxt/tf_ulp/ulp_utils.c             |  521 ++++
 drivers/net/bnxt/tf_ulp/ulp_utils.h             |  279 ++
 53 files changed, 25880 insertions(+), 495 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.c
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.h
 create mode 100644 drivers/net/bnxt/tf_core/hwrm_tf.h
 create mode 100644 drivers/net/bnxt/tf_core/lookup3.h
 create mode 100644 drivers/net/bnxt/tf_core/rand.c
 create mode 100644 drivers/net/bnxt/tf_core/rand.h
 create mode 100644 drivers/net/bnxt/tf_core/stack.c
 create mode 100644 drivers/net/bnxt/tf_core/stack.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_project.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_resources.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tfp.c
 create mode 100644 drivers/net/bnxt/tf_core/tfp.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.c
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_struct.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 01/34] net/bnxt: add updated dpdk hsi structure
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 02/34] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
                     ` (34 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Ajit Kumar Khaparde, Randy Schacher

From: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>

- Add most recent bnxt dpdk header.
- HWRM version updated to 1.10.1.30

Signed-off-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
---
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 3786 +++++++++++++++++++++++++++++---
 1 file changed, 3436 insertions(+), 350 deletions(-)

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index c2bae0f..cde96e7 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2014-2019 Broadcom Inc.
+ * Copyright (c) 2014-2020 Broadcom Inc.
  * All rights reserved.
  *
  * DO NOT MODIFY!!! This file is automatically generated.
@@ -386,6 +386,8 @@ struct cmd_nums {
 	#define HWRM_PORT_PHY_MDIO_READ                   UINT32_C(0xb6)
 	#define HWRM_PORT_PHY_MDIO_BUS_ACQUIRE            UINT32_C(0xb7)
 	#define HWRM_PORT_PHY_MDIO_BUS_RELEASE            UINT32_C(0xb8)
+	#define HWRM_PORT_QSTATS_EXT_PFC_WD               UINT32_C(0xb9)
+	#define HWRM_PORT_ECN_QSTATS                      UINT32_C(0xba)
 	#define HWRM_FW_RESET                             UINT32_C(0xc0)
 	#define HWRM_FW_QSTATUS                           UINT32_C(0xc1)
 	#define HWRM_FW_HEALTH_CHECK                      UINT32_C(0xc2)
@@ -404,6 +406,8 @@ struct cmd_nums {
 	#define HWRM_FW_GET_STRUCTURED_DATA               UINT32_C(0xcb)
 	/* Experimental */
 	#define HWRM_FW_IPC_MAILBOX                       UINT32_C(0xcc)
+	#define HWRM_FW_ECN_CFG                           UINT32_C(0xcd)
+	#define HWRM_FW_ECN_QCFG                          UINT32_C(0xce)
 	#define HWRM_EXEC_FWD_RESP                        UINT32_C(0xd0)
 	#define HWRM_REJECT_FWD_RESP                      UINT32_C(0xd1)
 	#define HWRM_FWD_RESP                             UINT32_C(0xd2)
@@ -419,6 +423,7 @@ struct cmd_nums {
 	#define HWRM_TEMP_MONITOR_QUERY                   UINT32_C(0xe0)
 	#define HWRM_REG_POWER_QUERY                      UINT32_C(0xe1)
 	#define HWRM_CORE_FREQUENCY_QUERY                 UINT32_C(0xe2)
+	#define HWRM_REG_POWER_HISTOGRAM                  UINT32_C(0xe3)
 	#define HWRM_WOL_FILTER_ALLOC                     UINT32_C(0xf0)
 	#define HWRM_WOL_FILTER_FREE                      UINT32_C(0xf1)
 	#define HWRM_WOL_FILTER_QCFG                      UINT32_C(0xf2)
@@ -510,7 +515,7 @@ struct cmd_nums {
 	#define HWRM_CFA_EEM_OP                           UINT32_C(0x123)
 	/* Experimental */
 	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS              UINT32_C(0x124)
-	/* Experimental */
+	/* Experimental - DEPRECATED */
 	#define HWRM_CFA_TFLIB                            UINT32_C(0x125)
 	/* Engine CKV - Get the current allocation status of keys provisioned in the key vault. */
 	#define HWRM_ENGINE_CKV_STATUS                    UINT32_C(0x12e)
@@ -629,6 +634,56 @@ struct cmd_nums {
 	 * to the host test.
 	 */
 	#define HWRM_MFG_HDMA_TEST                        UINT32_C(0x209)
+	/* Tells the fw to program the fru memory */
+	#define HWRM_MFG_FRU_EEPROM_WRITE                 UINT32_C(0x20a)
+	/* Tells the fw to read the fru memory */
+	#define HWRM_MFG_FRU_EEPROM_READ                  UINT32_C(0x20b)
+	/* Experimental */
+	#define HWRM_TF                                   UINT32_C(0x2bc)
+	/* Experimental */
+	#define HWRM_TF_VERSION_GET                       UINT32_C(0x2bd)
+	/* Experimental */
+	#define HWRM_TF_SESSION_OPEN                      UINT32_C(0x2c6)
+	/* Experimental */
+	#define HWRM_TF_SESSION_ATTACH                    UINT32_C(0x2c7)
+	/* Experimental */
+	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2c8)
+	/* Experimental */
+	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2c9)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2ca)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cb)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2cc)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cd)
+	/* Experimental */
+	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2d0)
+	/* Experimental */
+	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2d1)
+	/* Experimental */
+	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2da)
+	/* Experimental */
+	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2db)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2dc)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2dd)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2de)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2df)
+	/* Experimental */
+	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2ee)
+	/* Experimental */
+	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2ef)
+	/* Experimental */
+	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2f0)
+	/* Experimental */
+	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2f1)
+	/* Experimental */
+	#define HWRM_SV                                   UINT32_C(0x400)
 	/* Experimental */
 	#define HWRM_DBG_READ_DIRECT                      UINT32_C(0xff10)
 	/* Experimental */
@@ -658,6 +713,8 @@ struct cmd_nums {
 	#define HWRM_DBG_CRASHDUMP_HEADER                 UINT32_C(0xff1d)
 	/* Experimental */
 	#define HWRM_DBG_CRASHDUMP_ERASE                  UINT32_C(0xff1e)
+	/* Send driver debug information to firmware */
+	#define HWRM_DBG_DRV_TRACE                        UINT32_C(0xff1f)
 	/* Experimental */
 	#define HWRM_NVM_FACTORY_DEFAULTS                 UINT32_C(0xffee)
 	#define HWRM_NVM_VALIDATE_OPTION                  UINT32_C(0xffef)
@@ -857,8 +914,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 1
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 6
-#define HWRM_VERSION_STR "1.10.1.6"
+#define HWRM_VERSION_RSVD 30
+#define HWRM_VERSION_STR "1.10.1.30"
 
 /****************
  * hwrm_ver_get *
@@ -1143,6 +1200,7 @@ struct hwrm_ver_get_output {
 	#define HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_ADV_FLOW_MGNT_SUPPORTED \
 		UINT32_C(0x1000)
 	/*
+	 * Deprecated and replaced with cfa_truflow_supported.
 	 * If set to 1, the firmware is able to support TFLIB features.
 	 * If set to 0, then the firmware doesn’t support TFLIB features.
 	 * By default, this flag should be 0 for older version of core firmware.
@@ -1150,6 +1208,14 @@ struct hwrm_ver_get_output {
 	#define HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_TFLIB_SUPPORTED \
 		UINT32_C(0x2000)
 	/*
+	 * If set to 1, the firmware is able to support TruFlow features.
+	 * If set to 0, then the firmware doesn’t support TruFlow features.
+	 * By default, this flag should be 0 for older version of
+	 * core firmware.
+	 */
+	#define HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_TRUFLOW_SUPPORTED \
+		UINT32_C(0x4000)
+	/*
 	 * This field represents the major version of RoCE firmware.
 	 * A change in major version represents a major release.
 	 */
@@ -4508,10 +4574,16 @@ struct hwrm_async_event_cmpl {
 	 */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_EEM_CFG_CHANGE \
 		UINT32_C(0x3c)
-	/* TFLIB unique default VNIC Configuration Change */
+	/*
+	 * Deprecated.
+	 * TFLIB unique default VNIC Configuration Change
+	 */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_TFLIB_DEFAULT_VNIC_CHANGE \
 		UINT32_C(0x3d)
-	/* TFLIB unique link status changed */
+	/*
+	 * Deprecated.
+	 * TFLIB unique link status changed
+	 */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_TFLIB_LINK_STATUS_CHANGE \
 		UINT32_C(0x3e)
 	/*
@@ -4521,6 +4593,19 @@ struct hwrm_async_event_cmpl {
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_QUIESCE_DONE \
 		UINT32_C(0x3f)
 	/*
+	 * An event signifying a HWRM command is in progress and its
+	 * response will be deferred. This event is used on crypto controllers
+	 * only.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_DEFERRED_RESPONSE \
+		UINT32_C(0x40)
+	/*
+	 * An event signifying that a PFC WatchDog configuration
+	 * has changed on any port / cos.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE \
+		UINT32_C(0x41)
+	/*
 	 * A trace log message. This contains firmware trace logs string
 	 * embedded in the asynchronous message. This is an experimental
 	 * event, not meant for production use at this time.
@@ -6393,6 +6478,36 @@ struct hwrm_async_event_cmpl_quiesce_done {
 		UINT32_C(0x2)
 	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_QUIESCE_STATUS_LAST \
 		HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_QUIESCE_STATUS_ERROR
+	/* opaque is 8 b */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_OPAQUE_MASK \
+		UINT32_C(0xff00)
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_OPAQUE_SFT \
+		8
+	/*
+	 * Additional information about internal hardware state related to
+	 * idle/quiesce state.  QUIESCE may succeed per quiesce_status
+	 * regardless of idle_state_flags.  If QUIESCE fails, the host may
+	 * inspect idle_state_flags to determine whether a retry is warranted.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_MASK \
+		UINT32_C(0xff0000)
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_SFT \
+		16
+	/*
+	 * Failure to quiesce is caused by host not updating the NQ consumer
+	 * index.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_INCOMPLETE_NQ \
+		UINT32_C(0x10000)
+	/* Flag 1 indicating partial non-idle state. */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_IDLE_STATUS_1 \
+		UINT32_C(0x20000)
+	/* Flag 2 indicating partial non-idle state. */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_IDLE_STATUS_2 \
+		UINT32_C(0x40000)
+	/* Flag 3 indicating partial non-idle state. */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_IDLE_STATUS_3 \
+		UINT32_C(0x80000)
 	uint8_t	opaque_v;
 	/*
 	 * This value is written by the NIC such that it will be different
@@ -6414,6 +6529,152 @@ struct hwrm_async_event_cmpl_quiesce_done {
 		UINT32_C(0x1)
 } __attribute__((packed));
 
+/* hwrm_async_event_cmpl_deferred_response (size:128b/16B) */
+struct hwrm_async_event_cmpl_deferred_response {
+	uint16_t	type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_SFT             0
+	/* HWRM Asynchronous Event Information */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_HWRM_ASYNC_EVENT \
+		UINT32_C(0x2e)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_HWRM_ASYNC_EVENT
+	/* Identifiers of events. */
+	uint16_t	event_id;
+	/*
+	 * An event signifying a HWRM command is in progress and its
+	 * response will be deferred
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_ID_DEFERRED_RESPONSE \
+		UINT32_C(0x40)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_ID_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_ID_DEFERRED_RESPONSE
+	/* Event specific data */
+	uint32_t	event_data2;
+	/*
+	 * The PF's mailbox is clear to issue another command.
+	 * A command with this seq_id is still in progress
+	 * and will return a regular HWRM completion when done.
+	 * 'event_data1' field, if non-zero, contains the estimated
+	 * execution time for the command.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_DATA2_SEQ_ID_MASK \
+		UINT32_C(0xffff)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_DATA2_SEQ_ID_SFT \
+		0
+	uint8_t	opaque_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_V \
+		UINT32_C(0x1)
+	/* opaque is 7 b */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_OPAQUE_MASK \
+		UINT32_C(0xfe)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_OPAQUE_SFT 1
+	/* 8-lsb timestamp from POR (100-msec resolution) */
+	uint8_t	timestamp_lo;
+	/* 16-lsb timestamp from POR (100-msec resolution) */
+	uint16_t	timestamp_hi;
+	/* Estimated remaining time of command execution in ms (if not zero) */
+	uint32_t	event_data1;
+} __attribute__((packed));
+
+/* hwrm_async_event_cmpl_pfc_watchdog_cfg_change (size:128b/16B) */
+struct hwrm_async_event_cmpl_pfc_watchdog_cfg_change {
+	uint16_t	type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_SFT \
+		0
+	/* HWRM Asynchronous Event Information */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_HWRM_ASYNC_EVENT \
+		UINT32_C(0x2e)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_LAST \
+		HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_HWRM_ASYNC_EVENT
+	/* Identifiers of events. */
+	uint16_t	event_id;
+	/* PFC watchdog configuration change for given port/cos */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE \
+		UINT32_C(0x41)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_ID_LAST \
+		HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE
+	/* Event specific data */
+	uint32_t	event_data2;
+	uint8_t	opaque_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_V \
+		UINT32_C(0x1)
+	/* opaque is 7 b */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_OPAQUE_MASK \
+		UINT32_C(0xfe)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_OPAQUE_SFT 1
+	/* 8-lsb timestamp from POR (100-msec resolution) */
+	uint8_t	timestamp_lo;
+	/* 16-lsb timestamp from POR (100-msec resolution) */
+	uint16_t	timestamp_hi;
+	/* Event specific data */
+	uint32_t	event_data1;
+	/*
+	 * 1 in bit position X indicates PFC watchdog should
+	 * be on for COSX
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_MASK \
+		UINT32_C(0xff)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_SFT \
+		0
+	/* 1 means PFC WD for COS0 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS0 \
+		UINT32_C(0x1)
+	/* 1 means PFC WD for COS1 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS1 \
+		UINT32_C(0x2)
+	/* 1 means PFC WD for COS2 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS2 \
+		UINT32_C(0x4)
+	/* 1 means PFC WD for COS3 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS3 \
+		UINT32_C(0x8)
+	/* 1 means PFC WD for COS4 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS4 \
+		UINT32_C(0x10)
+	/* 1 means PFC WD for COS5 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS5 \
+		UINT32_C(0x20)
+	/* 1 means PFC WD for COS6 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS6 \
+		UINT32_C(0x40)
+	/* 1 means PFC WD for COS7 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS7 \
+		UINT32_C(0x80)
+	/* PORT ID */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PORT_ID_MASK \
+		UINT32_C(0xffff00)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PORT_ID_SFT \
+		8
+} __attribute__((packed));
+
 /* hwrm_async_event_cmpl_fw_trace_msg (size:128b/16B) */
 struct hwrm_async_event_cmpl_fw_trace_msg {
 	uint16_t	type;
@@ -7220,7 +7481,7 @@ struct hwrm_func_qcaps_input {
 	uint8_t	unused_0[6];
 } __attribute__((packed));
 
-/* hwrm_func_qcaps_output (size:640b/80B) */
+/* hwrm_func_qcaps_output (size:704b/88B) */
 struct hwrm_func_qcaps_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -7441,6 +7702,33 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_NOTIFY_VF_DEF_VNIC_CHNG_SUPPORTED \
 		UINT32_C(0x4000000)
+	/* If set to 1, then the vlan acceleration for TX is disabled. */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_VLAN_ACCELERATION_TX_DISABLED \
+		UINT32_C(0x8000000)
+	/*
+	 * When this bit is '1', it indicates that core firmware supports
+	 * DBG_COREDUMP_XXX commands.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_COREDUMP_CMD_SUPPORTED \
+		UINT32_C(0x10000000)
+	/*
+	 * When this bit is '1', it indicates that core firmware supports
+	 * DBG_CRASHDUMP_XXX commands.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_CRASHDUMP_CMD_SUPPORTED \
+		UINT32_C(0x20000000)
+	/*
+	 * If the query is for a VF, then this flag should be ignored.
+	 * If the query is for a PF and this flag is set to 1, then
+	 * the PF has the capability to support retrieval of
+	 * rx_port_stats_ext_pfc_wd statistics (supported by the PFC
+	 * WatchDog feature) via the hwrm_port_qstats_ext_pfc_wd command.
+	 * If this flag is set to 1, only that (supported) command should
+	 * be used for retrieval of PFC related statistics (rather than
+	 * hwrm_port_qstats_ext command, which could previously be used).
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_PFC_WD_STATS_SUPPORTED \
+		UINT32_C(0x40000000)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
@@ -7551,7 +7839,22 @@ struct hwrm_func_qcaps_output {
 	 * (max_tx_rings) to the function.
 	 */
 	uint16_t	max_sp_tx_rings;
-	uint8_t	unused_0;
+	uint8_t	unused_0[2];
+	uint32_t	flags_ext;
+	/*
+	 * If 1, the device can be configured to set the ECN bits in the
+	 * IP header of received packets if the receive queue length
+	 * exceeds a given threshold.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_MARK_SUPPORTED \
+		UINT32_C(0x1)
+	/*
+	 * If 1, the device can report the number of received packets
+	 * that it marked as having experienced congestion.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_STATS_SUPPORTED \
+		UINT32_C(0x2)
+	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -7606,7 +7909,7 @@ struct hwrm_func_qcfg_input {
 	uint8_t	unused_0[6];
 } __attribute__((packed));
 
-/* hwrm_func_qcfg_output (size:704b/88B) */
+/* hwrm_func_qcfg_output (size:768b/96B) */
 struct hwrm_func_qcfg_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -8016,7 +8319,17 @@ struct hwrm_func_qcfg_output {
 	 * this value to find out the doorbell page offset from the BAR.
 	 */
 	uint16_t	legacy_l2_db_size_kb;
-	uint8_t	unused_2[1];
+	uint16_t	svif_info;
+	/*
+	 * This field specifies the source virtual interface of the function being
+	 * queried. Drivers can use this to program svif field in the L2 context
+	 * table
+	 */
+	#define HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK      UINT32_C(0x7fff)
+	#define HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_SFT       0
+	/* This field specifies whether svif is valid or not */
+	#define HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID     UINT32_C(0x8000)
+	uint8_t	unused_2[7];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -9862,8 +10175,12 @@ struct hwrm_func_backing_store_qcaps_output {
 	uint32_t	rsvd;
 	/* Reserved for future. */
 	uint16_t	rsvd1;
-	/* Reserved for future. */
-	uint8_t	rsvd2;
+	/*
+	 * Count of TQM fastpath rings to be used for allocating backing store.
+	 * Backing store configuration must be specified for each TQM ring from
+	 * this count in `backing_store_cfg`.
+	 */
+	uint8_t	tqm_fp_rings_count;
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -12178,116 +12495,163 @@ struct hwrm_error_recovery_qcfg_output {
 	 * this much time after writing reset_reg_val in reset_reg.
 	 */
 	uint8_t	delay_after_reset[16];
-	uint8_t	unused_1[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM.  This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal
-	 * processor, the order of writes has to be such that this field
-	 * is written last.
-	 */
-	uint8_t	valid;
-} __attribute__((packed));
-
-/***********************
- * hwrm_func_vlan_qcfg *
- ***********************/
-
-
-/* hwrm_func_vlan_qcfg_input (size:192b/24B) */
-struct hwrm_func_vlan_qcfg_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
+	 * Error recovery counter.
+	 * Lower 2 bits indicates address space location and upper 30 bits
+	 * indicates actual address.
+	 * A value of 0xFFFF-FFFF indicates this register does not exist.
 	 */
-	uint16_t	target_id;
+	uint32_t	err_recovery_cnt_reg;
+	/* Lower 2 bits indicates address space location. */
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_MASK \
+		UINT32_C(0x3)
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_SFT \
+		0
 	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
+	 * If value is 0, this register is located in PCIe config space.
+	 * Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint64_t	resp_addr;
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_PCIE_CFG \
+		UINT32_C(0x0)
 	/*
-	 * Function ID of the function that is being
-	 * configured.
-	 * If set to 0xFF... (All Fs), then the configuration is
-	 * for the requesting function.
+	 * If value is 1, this register is located in GRC address space.
+	 * Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint16_t	fid;
-	uint8_t	unused_0[6];
-} __attribute__((packed));
-
-/* hwrm_func_vlan_qcfg_output (size:320b/40B) */
-struct hwrm_func_vlan_qcfg_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	uint64_t	unused_0;
-	/* S-TAG VLAN identifier configured for the function. */
-	uint16_t	stag_vid;
-	/* S-TAG PCP value configured for the function. */
-	uint8_t	stag_pcp;
-	uint8_t	unused_1;
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_GRC \
+		UINT32_C(0x1)
 	/*
-	 * S-TAG TPID value configured for the function. This field is specified in
-	 * network byte order.
+	 * If value is 2, this register is located in first BAR address
+	 * space. Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint16_t	stag_tpid;
-	/* C-TAG VLAN identifier configured for the function. */
-	uint16_t	ctag_vid;
-	/* C-TAG PCP value configured for the function. */
-	uint8_t	ctag_pcp;
-	uint8_t	unused_2;
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_BAR0 \
+		UINT32_C(0x2)
 	/*
-	 * C-TAG TPID value configured for the function. This field is specified in
-	 * network byte order.
+	 * If value is 3, this register is located in second BAR address
+	 * space. Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint16_t	ctag_tpid;
-	/* Future use. */
-	uint32_t	rsvd2;
-	/* Future use. */
-	uint32_t	rsvd3;
-	uint8_t	unused_3[3];
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_BAR1 \
+		UINT32_C(0x3)
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_LAST \
+		HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_BAR1
+	/* Upper 30bits of the register address. */
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_MASK \
+		UINT32_C(0xfffffffc)
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SFT \
+		2
+	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
 	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
 
-/**********************
- * hwrm_func_vlan_cfg *
- **********************/
+/***********************
+ * hwrm_func_vlan_qcfg *
+ ***********************/
 
 
-/* hwrm_func_vlan_cfg_input (size:384b/48B) */
-struct hwrm_func_vlan_cfg_input {
+/* hwrm_func_vlan_qcfg_input (size:192b/24B) */
+struct hwrm_func_vlan_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Function ID of the function that is being
+	 * configured.
+	 * If set to 0xFF... (All Fs), then the configuration is
+	 * for the requesting function.
+	 */
+	uint16_t	fid;
+	uint8_t	unused_0[6];
+} __attribute__((packed));
+
+/* hwrm_func_vlan_qcfg_output (size:320b/40B) */
+struct hwrm_func_vlan_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint64_t	unused_0;
+	/* S-TAG VLAN identifier configured for the function. */
+	uint16_t	stag_vid;
+	/* S-TAG PCP value configured for the function. */
+	uint8_t	stag_pcp;
+	uint8_t	unused_1;
+	/*
+	 * S-TAG TPID value configured for the function. This field is specified in
+	 * network byte order.
+	 */
+	uint16_t	stag_tpid;
+	/* C-TAG VLAN identifier configured for the function. */
+	uint16_t	ctag_vid;
+	/* C-TAG PCP value configured for the function. */
+	uint8_t	ctag_pcp;
+	uint8_t	unused_2;
+	/*
+	 * C-TAG TPID value configured for the function. This field is specified in
+	 * network byte order.
+	 */
+	uint16_t	ctag_tpid;
+	/* Future use. */
+	uint32_t	rsvd2;
+	/* Future use. */
+	uint32_t	rsvd3;
+	uint8_t	unused_3[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/**********************
+ * hwrm_func_vlan_cfg *
+ **********************/
+
+
+/* hwrm_func_vlan_cfg_input (size:384b/48B) */
+struct hwrm_func_vlan_cfg_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -14039,6 +14403,9 @@ struct hwrm_port_phy_qcfg_output {
 	/* Module is not inserted. */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_NOTINSERTED \
 		UINT32_C(0x4)
+	/* Module is powered down because of an over current fault. */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_CURRENTFAULT \
+		UINT32_C(0x5)
 	/* Module status is not applicable. */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_NOTAPPLICABLE \
 		UINT32_C(0xff)
@@ -15010,7 +15377,7 @@ struct hwrm_port_mac_qcfg_input {
 	uint8_t	unused_0[6];
 } __attribute__((packed));
 
-/* hwrm_port_mac_qcfg_output (size:192b/24B) */
+/* hwrm_port_mac_qcfg_output (size:256b/32B) */
 struct hwrm_port_mac_qcfg_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -15250,6 +15617,20 @@ struct hwrm_port_mac_qcfg_output {
 		UINT32_C(0xe0)
 	#define HWRM_PORT_MAC_QCFG_OUTPUT_COS_FIELD_CFG_DEFAULT_COS_SFT \
 		5
+	uint8_t	unused_1;
+	uint16_t	port_svif_info;
+	/*
+	 * This field specifies the source virtual interface of the port being
+	 * queried. Drivers can use this to program the port svif field in
+	 * the L2 context table.
+	 */
+	#define HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_MASK \
+		UINT32_C(0x7fff)
+	#define HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_SFT       0
+	/* This field specifies whether port_svif is valid or not */
+	#define HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_VALID \
+		UINT32_C(0x8000)
+	uint8_t	unused_2[5];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
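
The SVIF information added above is a mask-and-valid pair; a short sketch of
how a driver might consume it (resp is assumed to point at a completed
hwrm_port_mac_qcfg_output):

    /* Sketch: use the port SVIF only when firmware marks it valid. */
    uint16_t info = rte_le_to_cpu_16(resp->port_svif_info);

    if (info & HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_VALID) {
        uint16_t svif = info &
            HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_MASK;
        /* program svif into the L2 context table key */
    }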
@@ -15322,17 +15703,17 @@ struct hwrm_port_mac_ptp_qcfg_output {
 	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_DIRECT_ACCESS \
 		UINT32_C(0x1)
 	/*
-	 * When this bit is set to '1', the PTP information is accessible
-	 * via HWRM commands.
-	 */
-	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_HWRM_ACCESS \
-		UINT32_C(0x2)
-	/*
 	 * When this bit is set to '1', the device supports one-step
 	 * Tx timestamping.
 	 */
 	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_ONE_STEP_TX_TS \
 		UINT32_C(0x4)
+	/*
+	 * When this bit is set to '1', the PTP information is accessible
+	 * via HWRM commands.
+	 */
+	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_HWRM_ACCESS \
+		UINT32_C(0x8)
 	uint8_t	unused_0[3];
 	/* Offset of the PTP register for the lower 32 bits of timestamp for RX. */
 	uint32_t	rx_ts_reg_off_lower;
@@ -15375,7 +15756,7 @@ struct hwrm_port_mac_ptp_qcfg_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
-/* Port Tx Statistics Formats */
+/* Port Tx Statistics Format */
 /* tx_port_stats (size:3264b/408B) */
 struct tx_port_stats {
 	/* Total Number of 64 Bytes frames transmitted */
@@ -15516,7 +15897,7 @@ struct tx_port_stats {
 	uint64_t	tx_stat_error;
 } __attribute__((packed));
 
-/* Port Rx Statistics Formats */
+/* Port Rx Statistics Format */
 /* rx_port_stats (size:4224b/528B) */
 struct rx_port_stats {
 	/* Total Number of 64 Bytes frames received */
@@ -15806,7 +16187,7 @@ struct hwrm_port_qstats_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
-/* Port Tx Statistics extended Formats */
+/* Port Tx Statistics extended Format */
 /* tx_port_stats_ext (size:2048b/256B) */
 struct tx_port_stats_ext {
 	/* Total number of tx bytes count on cos queue 0 */
@@ -15875,7 +16256,7 @@ struct tx_port_stats_ext {
 	uint64_t	pfc_pri7_tx_transitions;
 } __attribute__((packed));
 
-/* Port Rx Statistics extended Formats */
+/* Port Rx Statistics extended Format */
 /* rx_port_stats_ext (size:3648b/456B) */
 struct rx_port_stats_ext {
 	/* Number of times link state changed to down */
@@ -15997,6 +16378,424 @@ struct rx_port_stats_ext {
 	uint64_t	rx_discard_packets_cos7;
 } __attribute__((packed));
 
+/*
+ * Port Rx Statistics extended PFC WatchDog Format.
+ * StormDetect and StormRevert event determination is based
+ * on an integration period and a percentage threshold.
+ * StormDetect event - when the percentage of XOFF frames received
+ * within an integration period exceeds the configured threshold.
+ * StormRevert event - when the percentage of XON frames received
+ * within an integration period exceeds the configured threshold.
+ * Actual number of XOFF/XON frames for the events to be triggered
+ * depends on both configured integration period and sampling rate.
+ * The statistics in this structure represent counts of specified
+ * events from the moment the feature (PFC WatchDog) is enabled via
+ * hwrm_queue_pfc_enable_cfg call.
+ */
+/* rx_port_stats_ext_pfc_wd (size:5120b/640B) */
+struct rx_port_stats_ext_pfc_wd {
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri0;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri1;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri2;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri3;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri4;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri5;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri6;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri7;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri0;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri1;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri2;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri3;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri4;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri5;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri6;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri7;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri0;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri1;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri2;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri3;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri4;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri5;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri6;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri7;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri0;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri1;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri2;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri3;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri4;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri5;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri6;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri7;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri0;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri1;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri2;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri3;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri4;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri5;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri6;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri7;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri0;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri1;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri2;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri3;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri4;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri5;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri6;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri7;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri0;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri1;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri2;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri3;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri4;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri5;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri6;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri7;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri0;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri1;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri2;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri3;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri4;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri5;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri6;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri7;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri0;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri1;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri2;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri3;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri4;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri5;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri6;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri7;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri0;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri1;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri2;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri3;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri4;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri5;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri6;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri7;
+} __attribute__((packed));
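
The StormDetect/StormRevert determination described in the comment block above
reduces to a percentage check over one integration period. The detection
itself runs in hardware/firmware; the sketch below is purely illustrative and
all names are hypothetical:

    /* Returns 1 when the XOFF (or XON, for revert) percentage within one
     * integration period exceeds the configured threshold. */
    static int
    pfc_wd_storm_event(uint64_t frames, uint64_t frames_sampled,
                       uint32_t threshold_pct)
    {
        if (frames_sampled == 0)
            return 0;
        return (frames * 100 / frames_sampled) > threshold_pct;
    }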
+
 /************************
  * hwrm_port_qstats_ext *
  ************************/
@@ -16090,6 +16889,83 @@ struct hwrm_port_qstats_ext_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
+/*******************************
+ * hwrm_port_qstats_ext_pfc_wd *
+ *******************************/
+
+
+/* hwrm_port_qstats_ext_pfc_wd_input (size:256b/32B) */
+struct hwrm_port_qstats_ext_pfc_wd_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointing to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Port ID of port that is being queried. */
+	uint16_t	port_id;
+	/*
+	 * The size of rx_port_stats_ext_pfc_wd
+	 * block in bytes
+	 */
+	uint16_t	pfc_wd_stat_size;
+	uint8_t	unused_0[4];
+	/*
+	 * This is the host address where
+	 * rx_port_stats_ext_pfc_wd will be stored
+	 */
+	uint64_t	pfc_wd_stat_host_addr;
+} __attribute__((packed));
+
+/* hwrm_port_qstats_ext_pfc_wd_output (size:128b/16B) */
+struct hwrm_port_qstats_ext_pfc_wd_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * The size of rx_port_stats_ext_pfc_wd
+	 * statistics block in bytes.
+	 */
+	uint16_t	pfc_wd_stat_size;
+	uint8_t	flags;
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+	uint8_t	unused_0[4];
+} __attribute__((packed));
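
This command is a DMA handshake: the driver passes a host buffer and its size,
and firmware writes the rx_port_stats_ext_pfc_wd block into it. A hedged
sketch, where hwrm_send() stands in for the driver's HWRM transport (which is
also assumed to fill the common header fields):

    extern int hwrm_send(void *bp, void *req, int req_len); /* hypothetical */

    static int
    query_pfc_wd_stats(void *bp, uint16_t port_id,
                       struct rx_port_stats_ext_pfc_wd *stats,
                       uint64_t stats_iova)
    {
        struct hwrm_port_qstats_ext_pfc_wd_input req = { 0 };

        /* req_type/seq_id/resp_addr are filled by the transport */
        req.port_id = rte_cpu_to_le_16(port_id);
        req.pfc_wd_stat_size = rte_cpu_to_le_16(sizeof(*stats));
        req.pfc_wd_stat_host_addr = rte_cpu_to_le_64(stats_iova);
        return hwrm_send(bp, &req, sizeof(req));
    }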
+
 /*************************
  * hwrm_port_lpbk_qstats *
  *************************/
@@ -16168,6 +17044,91 @@ struct hwrm_port_lpbk_qstats_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
+/************************
+ * hwrm_port_ecn_qstats *
+ ************************/
+
+
+/* hwrm_port_ecn_qstats_input (size:192b/24B) */
+struct hwrm_port_ecn_qstats_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointing to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Port ID of port that is being queried. Unused if NIC is in
+	 * multi-host mode.
+	 */
+	uint16_t	port_id;
+	uint8_t	unused_0[6];
+} __attribute__((packed));
+
+/* hwrm_port_ecn_qstats_output (size:384b/48B) */
+struct hwrm_port_ecn_qstats_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of packets marked in CoS queue 0. */
+	uint32_t	mark_cnt_cos0;
+	/* Number of packets marked in CoS queue 1. */
+	uint32_t	mark_cnt_cos1;
+	/* Number of packets marked in CoS queue 2. */
+	uint32_t	mark_cnt_cos2;
+	/* Number of packets marked in CoS queue 3. */
+	uint32_t	mark_cnt_cos3;
+	/* Number of packets marked in CoS queue 4. */
+	uint32_t	mark_cnt_cos4;
+	/* Number of packets marked in CoS queue 5. */
+	uint32_t	mark_cnt_cos5;
+	/* Number of packets marked in CoS queue 6. */
+	uint32_t	mark_cnt_cos6;
+	/* Number of packets marked in CoS queue 7. */
+	uint32_t	mark_cnt_cos7;
+	/*
+	 * Bitmask that indicates which CoS queues have ECN marking enabled.
+	 * Bit i corresponds to CoS queue i.
+	 */
+	uint8_t	mark_en;
+	uint8_t	unused_0[6];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
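
Since bit i of mark_en corresponds to CoS queue i, decoding the enable mask is
a one-line loop; a minimal sketch:

    #include <stdio.h>

    /* Sketch: report which CoS queues have ECN marking enabled. */
    static void
    show_ecn_mark_enable(const struct hwrm_port_ecn_qstats_output *resp)
    {
        int i;

        for (i = 0; i < 8; i++)
            if (resp->mark_en & (1u << i))
                printf("ECN marking enabled on CoS queue %d\n", i);
    }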
+
 /***********************
  * hwrm_port_clr_stats *
  ***********************/
@@ -18322,7 +19283,7 @@ struct hwrm_port_phy_mdio_bus_acquire_input {
 	 * Timeout in milli seconds, MDIO BUS will be released automatically
 	 * after this time, if another mdio acquire command is not received
 	 * within the timeout window from the same client.
-	 * A 0xFFFF will hold the bus until this bus is released.
+	 * A value of 0xFFFF will hold the bus until it is released.
 	 */
 	uint16_t	mdio_bus_timeout;
 	uint8_t	unused_0[2];
@@ -19158,6 +20119,30 @@ struct hwrm_queue_pfcenable_qcfg_output {
 	/* If set to 1, then PFC is enabled on PRI 7. */
 	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI7_PFC_ENABLED \
 		UINT32_C(0x80)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI0. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI0_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x100)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI1. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI1_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x200)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI2. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI2_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x400)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI3. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI3_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x800)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI4. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI4_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x1000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI5. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI5_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x2000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI6. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI6_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x4000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI7. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI7_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x8000)
 	uint8_t	unused_0[3];
 	/*
 	 * This field is used in Output records to indicate that the output
@@ -19229,6 +20214,30 @@ struct hwrm_queue_pfcenable_cfg_input {
 	/* If set to 1, then PFC is requested to be enabled on PRI 7. */
 	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI7_PFC_ENABLED \
 		UINT32_C(0x80)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI0. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI0_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x100)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI1. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI1_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x200)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI2. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI2_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x400)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI3. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI3_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x800)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI4. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI4_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x1000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI5. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI5_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x2000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI6. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI6_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x4000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI7. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI7_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x8000)
 	/*
 	 * Port ID of port for which the table is being configured.
 	 * The HWRM needs to check whether this function is allowed
@@ -31831,15 +32840,2172 @@ struct hwrm_cfa_eem_qcfg_input {
 	 */
 	uint64_t	resp_addr;
 	uint32_t	flags;
-	/* When set to 1, indicates the configuration is the TX flow. */
-	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
-	/* When set to 1, indicates the configuration is the RX flow. */
-	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
-	uint32_t	unused_0;
+	/* When set to 1, indicates the configuration is the TX flow. */
+	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
+	/* When set to 1, indicates the configuration is the RX flow. */
+	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
+	uint32_t	unused_0;
+} __attribute__((packed));
+
+/* hwrm_cfa_eem_qcfg_output (size:256b/32B) */
+struct hwrm_cfa_eem_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/* When set to 1, indicates the configuration is the TX flow. */
+	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_TX \
+		UINT32_C(0x1)
+	/* When set to 1, indicates the configuration is the RX flow. */
+	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_RX \
+		UINT32_C(0x2)
+	/* When set to 1, all offloaded flows will be sent to EEM. */
+	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x4)
+	/* The number of entries the FW has configured for EEM. */
+	uint32_t	num_entries;
+	/* Context id configured in EEM for the KEY0 table. */
+	uint16_t	key0_ctx_id;
+	/* Context id configured in EEM for the KEY1 table. */
+	uint16_t	key1_ctx_id;
+	/* Context id configured in EEM for the RECORD table. */
+	uint16_t	record_ctx_id;
+	/* Context id configured in EEM for the EFC table. */
+	uint16_t	efc_ctx_id;
+	/* Context id configured in EEM for the FID table. */
+	uint16_t	fid_ctx_id;
+	uint8_t	unused_2[5];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/*******************
+ * hwrm_cfa_eem_op *
+ *******************/
+
+
+/* hwrm_cfa_eem_op_input (size:192b/24B) */
+struct hwrm_cfa_eem_op_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointing to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	flags;
+	/*
+	 * When set to 1, indicates the host memory which is passed will be
+	 * used for the TX flow offload function specified in fid.
+	 * Note if this bit is set then the path_rx bit can't be set.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
+	/*
+	 * When set to 1, indicates the host memory which is passed will be
+	 * used for the RX flow offload function specified in fid.
+	 * Note if this bit is set then the path_tx bit can't be set.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
+	uint16_t	unused_0;
+	/* The EEM operation to perform. */
+	uint16_t	op;
+	/* This value is reserved and should not be used. */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_RESERVED    UINT32_C(0x0)
+	/*
+	 * To properly stop EEM and ensure there are no DMA's, the caller
+	 * must disable EEM for the given PF, using this call. This will
+	 * safely disable EEM and ensure that all DMA writes to the
+	 * keys/records/efc memory have completed.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_DISABLE UINT32_C(0x1)
+	/*
+	 * Once the EEM host memory and EEM options have been configured,
+	 * the caller should enable EEM for the given
+	 * PF. Note that once this call has been made, the EEM mechanism
+	 * will be active and DMAs will occur as packets are processed.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_ENABLE  UINT32_C(0x2)
+	/*
+	 * Clear EEM settings for the given PF so that the register values
+	 * are reset back to their initial state.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP UINT32_C(0x3)
+	#define HWRM_CFA_EEM_OP_INPUT_OP_LAST \
+		HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP
+} __attribute__((packed));
+
+/* hwrm_cfa_eem_op_output (size:128b/16B) */
+struct hwrm_cfa_eem_op_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
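
The op values above imply an ordering contract: disable first to quiesce DMA,
clean up to reset state, reprogram host memory, then enable. A hedged sketch,
with hwrm_send_eem_op() as a hypothetical wrapper that builds and sends a
hwrm_cfa_eem_op_input:

    /* Sketch: safe EEM reconfiguration sequence for the RX path. */
    hwrm_send_eem_op(bp, HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX,
                     HWRM_CFA_EEM_OP_INPUT_OP_EEM_DISABLE); /* stop DMAs */
    hwrm_send_eem_op(bp, HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX,
                     HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP); /* reset regs */
    /* ... reprogram the EEM host memory configuration here ... */
    hwrm_send_eem_op(bp, HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX,
                     HWRM_CFA_EEM_OP_INPUT_OP_EEM_ENABLE);  /* resume */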
+
+/********************************
+ * hwrm_cfa_adv_flow_mgnt_qcaps *
+ ********************************/
+
+
+/* hwrm_cfa_adv_flow_mgnt_qcaps_input (size:256b/32B) */
+struct hwrm_cfa_adv_flow_mgnt_qcaps_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointing to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	unused_0[4];
+} __attribute__((packed));
+
+/* hwrm_cfa_adv_flow_mgnt_qcaps_output (size:128b/16B) */
+struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/*
+	 * Value of 1 to indicate that firmware supports 16-bit flow handles.
+	 * Value of 0 to indicate that firmware does not support them.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_16BIT_SUPPORTED \
+		UINT32_C(0x1)
+	/*
+	 * Value of 1 to indicate that firmware supports 64-bit flow handles.
+	 * Value of 0 to indicate that firmware does not support them.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_64BIT_SUPPORTED \
+		UINT32_C(0x2)
+	/*
+	 * Value of 1 to indicate firmware supports flow batch delete operation through
+	 * HWRM_CFA_FLOW_FLUSH command.
+	 * Value of 0 to indicate that the firmware does not support flow batch delete
+	 * operation.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_BATCH_DELETE_SUPPORTED \
+		UINT32_C(0x4)
+	/*
+	 * Value of 1 to indicate that the firmware supports flow reset all operation through
+	 * HWRM_CFA_FLOW_FLUSH command.
+	 * Value of 0 indicates firmware does not support flow reset all operation.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_RESET_ALL_SUPPORTED \
+		UINT32_C(0x8)
+	/*
+	 * Value of 1 to indicate that firmware supports use of FID as dest_id in
+	 * HWRM_CFA_NTUPLE_ALLOC/CFG commands.
+	 * Value of 0 indicates firmware does not support use of FID as dest_id.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_DEST_FUNC_SUPPORTED \
+		UINT32_C(0x10)
+	/*
+	 * Value of 1 to indicate that firmware supports TX EEM flows.
+	 * Value of 0 indicates firmware does not support TX EEM flows.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_TX_EEM_FLOW_SUPPORTED \
+		UINT32_C(0x20)
+	/*
+	 * Value of 1 to indicate that firmware supports RX EEM flows.
+	 * Value of 0 indicates firmware does not support RX EEM flows.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RX_EEM_FLOW_SUPPORTED \
+		UINT32_C(0x40)
+	/*
+	 * Value of 1 to indicate that firmware supports the dynamic allocation of an
+	 * on-chip flow counter which can be used for EEM flows.
+	 * Value of 0 indicates firmware does not support the dynamic allocation of an
+	 * on-chip flow counter.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_COUNTER_ALLOC_SUPPORTED \
+		UINT32_C(0x80)
+	/*
+	 * Value of 1 to indicate that firmware supports setting of
+	 * rfs_ring_tbl_idx in HWRM_CFA_NTUPLE_ALLOC command.
+	 * Value of 0 indicates firmware does not support rfs_ring_tbl_idx.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_SUPPORTED \
+		UINT32_C(0x100)
+	/*
+	 * Value of 1 to indicate that firmware supports untagged matching
+	 * criteria on HWRM_CFA_L2_FILTER_ALLOC command. Value of 0
+	 * indicates firmware does not support untagged matching.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_UNTAGGED_VLAN_SUPPORTED \
+		UINT32_C(0x200)
+	/*
+	 * Value of 1 to indicate that firmware supports XDP filter. Value
+	 * of 0 indicates firmware does not support XDP filter.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_XDP_SUPPORTED \
+		UINT32_C(0x400)
+	/*
+	 * Value of 1 to indicate that the firmware supports L2 header source
+	 * fields matching criteria on HWRM_CFA_L2_FILTER_ALLOC command.
+	 * Value of 0 indicates firmware does not support L2 header source
+	 * fields matching.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED \
+		UINT32_C(0x800)
+	/*
+	 * If set to 1, firmware is capable of supporting ARP ethertype as
+	 * matching criteria for HWRM_CFA_NTUPLE_FILTER_ALLOC command on the
+	 * RX direction. By default, this flag should be 0 for older version
+	 * of firmware.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ARP_SUPPORTED \
+		UINT32_C(0x1000)
+	/*
+	 * Value of 1 to indicate that firmware supports setting of
+	 * rfs_ring_tbl_idx in dst_id field of the HWRM_CFA_NTUPLE_ALLOC
+	 * command. Value of 0 indicates firmware does not support
+	 * rfs_ring_tbl_idx in dst_id field.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_V2_SUPPORTED \
+		UINT32_C(0x2000)
+	/*
+	 * If set to 1, firmware is capable of supporting IPv4/IPv6 as
+	 * ethertype in HWRM_CFA_NTUPLE_FILTER_ALLOC command on the RX
+	 * direction. By default, this flag should be 0 for older version
+	 * of firmware.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ETHERTYPE_IP_SUPPORTED \
+		UINT32_C(0x4000)
+	uint8_t	unused_0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
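
Drivers typically latch these capability bits once at probe time and branch on
them later; a minimal sketch (resp is assumed to point at a completed
response):

    #include <stdbool.h>

    uint32_t caps = rte_le_to_cpu_32(resp->flags);
    bool batch_delete = caps &
        HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_BATCH_DELETE_SUPPORTED;
    bool rx_eem = caps &
        HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RX_EEM_FLOW_SUPPORTED;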
+
+/******************
+ * hwrm_cfa_tflib *
+ ******************/
+
+
+/* hwrm_cfa_tflib_input (size:1024b/128B) */
+struct hwrm_cfa_tflib_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointing to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* TFLIB message type. */
+	uint16_t	tf_type;
+	/* TFLIB message subtype. */
+	uint16_t	tf_subtype;
+	/* unused. */
+	uint8_t	unused0[4];
+	/* TFLIB request data. */
+	uint32_t	tf_req[26];
+} __attribute__((packed));
+
+/* hwrm_cfa_tflib_output (size:5632b/704B) */
+struct hwrm_cfa_tflib_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* TFLIB message type. */
+	uint16_t	tf_type;
+	/* TFLIB message subtype. */
+	uint16_t	tf_subtype;
+	/* TFLIB response code */
+	uint32_t	tf_resp_code;
+	/* TFLIB response data. */
+	uint32_t	tf_resp[170];
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/***********
+ * hwrm_tf *
+ ***********/
+
+
+/* hwrm_tf_input (size:1024b/128B) */
+struct hwrm_tf_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointing to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* TF message type. */
+	uint16_t	type;
+	/* TF message subtype. */
+	uint16_t	subtype;
+	/* unused. */
+	uint8_t	unused0[4];
+	/* TF request data. */
+	uint32_t	req[26];
+} __attribute__((packed));
+
+/* hwrm_tf_output (size:5632b/704B) */
+struct hwrm_tf_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* TF message type. */
+	uint16_t	type;
+	/* TF message subtype. */
+	uint16_t	subtype;
+	/* TF response code */
+	uint32_t	resp_code;
+	/* TF response data. */
+	uint32_t	resp[170];
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
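
hwrm_tf is an envelope: type/subtype select the TruFlow operation while the
payload stays opaque to HWRM (26 32-bit words, i.e. 104 bytes, in the
request). A sketch of building the request; the payload layout is TF-message
specific and not shown:

    #include <string.h>

    /* Sketch: wrap an opaque TruFlow message in the hwrm_tf envelope. */
    static void
    tf_build_req(struct hwrm_tf_input *req, uint16_t type, uint16_t subtype,
                 const void *payload, size_t len)
    {
        memset(req, 0, sizeof(*req));
        req->type = rte_cpu_to_le_16(type);
        req->subtype = rte_cpu_to_le_16(subtype);
        if (len > sizeof(req->req))        /* 104-byte payload limit */
            len = sizeof(req->req);
        memcpy(req->req, payload, len);
        /* common header (req_type/seq_id/resp_addr) filled by transport */
    }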
+
+/***********************
+ * hwrm_tf_version_get *
+ ***********************/
+
+
+/* hwrm_tf_version_get_input (size:128b/16B) */
+struct hwrm_tf_version_get_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointing to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_version_get_output (size:128b/16B) */
+struct hwrm_tf_version_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Version Major number. */
+	uint8_t	major;
+	/* Version Minor number. */
+	uint8_t	minor;
+	/* Version Update number. */
+	uint8_t	update;
+	/* unused. */
+	uint8_t	unused0[4];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_session_open *
+ ************************/
+
+
+/* hwrm_tf_session_open_input (size:640b/80B) */
+struct hwrm_tf_session_open_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointing to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Name of the session. */
+	uint8_t	session_name[64];
+} __attribute__((packed));
+
+/* hwrm_tf_session_open_output (size:128b/16B) */
+struct hwrm_tf_session_open_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/**************************
+ * hwrm_tf_session_attach *
+ **************************/
+
+
+/* hwrm_tf_session_attach_input (size:704b/88B) */
+struct hwrm_tf_session_attach_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointing to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Unique session identifier for the session that the attach
+	 * request wants to attach to. This value originates from the
+	 * shared session memory that the attach request opened by
+	 * way of the 'attach name' that was passed in to the core
+	 * attach API.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
+	 */
+	uint32_t	attach_fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session itself. */
+	uint8_t	session_name[64];
+} __attribute__((packed));
+
+/* hwrm_tf_session_attach_output (size:128b/16B) */
+struct hwrm_tf_session_attach_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session. This fw_session_id is unique to the attach
+	 * request.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/*************************
+ * hwrm_tf_session_close *
+ *************************/
+
+
+/* hwrm_tf_session_close_input (size:192b/24B) */
+struct hwrm_tf_session_close_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointing to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[4];
+} __attribute__((packed));
+
+/* hwrm_tf_session_close_output (size:128b/16B) */
+struct hwrm_tf_session_close_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
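
Open and close bracket a session's lifetime around fw_session_id, which every
later TF command must carry. A hedged sketch of the pairing; hwrm_send() is
again a hypothetical transport that is assumed to DMA the response into the
output structure via resp_addr:

    #include <string.h>

    struct hwrm_tf_session_open_input open_req = { 0 };
    struct hwrm_tf_session_open_output open_resp = { 0 };
    struct hwrm_tf_session_close_input close_req = { 0 };

    strncpy((char *)open_req.session_name, "tf-core-0",
            sizeof(open_req.session_name) - 1);
    hwrm_send(bp, &open_req, sizeof(open_req));
    /* ... issue TF commands using open_resp.fw_session_id ... */
    close_req.fw_session_id = open_resp.fw_session_id; /* already LE */
    hwrm_send(bp, &close_req, sizeof(close_req));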
+
+/************************
+ * hwrm_tf_session_qcfg *
+ ************************/
+
+
+/* hwrm_tf_session_qcfg_input (size:192b/24B) */
+struct hwrm_tf_session_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointing to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[4];
+} __attribute__((packed));
+
+/* hwrm_tf_session_qcfg_output (size:128b/16B) */
+struct hwrm_tf_session_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* RX action control settings flags. */
+	uint8_t	rx_act_flags;
+	/*
+	 * A value of 1 in this field indicates that Global Flow ID
+	 * reporting into cfa_code and cfa_metadata is enabled.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_ABCR_GFID_EN \
+		UINT32_C(0x1)
+	/*
+	 * A value of 1 in this field indicates that both inner and outer
+	 * VLAN tags are stripped and the inner tag is passed on.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_ABCR_VTAG_DLT_BOTH \
+		UINT32_C(0x2)
+	/*
+	 * A value of 1 in this field indicates that the re-use of
+	 * existing tunnel L2 header SMAC is enabled for
+	 * Non-tunnel L2, L2-L3 and IP-IP tunnel.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_TECT_SMAC_OVR_RUTNSL2 \
+		UINT32_C(0x4)
+	/* TX Action control settings flags. */
+	uint8_t	tx_act_flags;
+	/* Disabled. */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_ABCR_VEB_EN \
+		UINT32_C(0x1)
+	/*
+	 * When set to 1, any GRE tunnel will include the
+	 * optional Key field.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_GRE_SET_K \
+		UINT32_C(0x2)
+	/*
+	 * When set to 1, for GRE tunnels, the IPV6 Traffic Class (TC)
+	 * field of the outer header is inherited from the inner header
+	 * (if present) or the fixed value as taken from the encap
+	 * record.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_IPV6_TC_IH \
+		UINT32_C(0x4)
+	/*
+	 * When set to 1, for GRE tunnels, the IPV4 Type Of Service (TOS)
+	 * field of the outer header is inherited from the inner header
+	 * (if present) or the fixed value as taken from the encap record.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_IPV4_TOS_IH \
+		UINT32_C(0x8)
+	/* unused. */
+	uint8_t	unused0[5];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
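
The two action-control bytes decode independently; a minimal sketch (resp is
assumed to point at a completed response):

    #include <stdbool.h>

    bool gfid_en = resp->rx_act_flags &
        HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_ABCR_GFID_EN;
    bool gre_key = resp->tx_act_flags &
        HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_GRE_SET_K;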
+
+/******************************
+ * hwrm_tf_session_resc_qcaps *
+ ******************************/
+
+
+/* hwrm_tf_session_resc_qcaps_input (size:256b/32B) */
+struct hwrm_tf_session_resc_qcaps_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided qcaps_addr
+	 * buffer. The size should be set to the Resource Manager
+	 * provided max qcaps value that is device specific. This is
+	 * the max size possible.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the qcaps output data
+	 * array. Array is of tf_rm_cap type and is device specific.
+	 */
+	uint64_t	qcaps_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_qcaps_output (size:192b/24B) */
+struct hwrm_tf_session_resc_qcaps_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Session reservation strategy. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_MASK \
+		UINT32_C(0x3)
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_SFT \
+		0
+	/* Static partitioning. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC \
+		UINT32_C(0x0)
+	/* Strategy 1. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1 \
+		UINT32_C(0x1)
+	/* Strategy 2. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2 \
+		UINT32_C(0x2)
+	/* Strategy 3. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3 \
+		UINT32_C(0x3)
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3
+	/*
+	 * Size of the returned tf_rm_cap data array. The value
+	 * cannot exceed the size defined by the input msg. The data
+	 * array is returned at the DMA address given by the
+	 * qcaps_addr field of the input msg.
+	 */
+	uint16_t	size;
+	/* unused. */
+	uint16_t	unused0;
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
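+
+/*
+ * A sketch of preparing a resc_qcaps request, assuming caps_dma is
+ * the bus address of a DMA-able array of n_caps tf_rm_cap entries
+ * (allocation of that buffer is driver specific and not shown).
+ */
+static void
+tf_prep_resc_qcaps(struct hwrm_tf_session_resc_qcaps_input *req,
+		   uint32_t fw_session_id, uint64_t caps_dma,
+		   uint16_t n_caps)
+{
+	req->fw_session_id = fw_session_id;
+	/* Query the TX direction; leave the bit clear for RX. */
+	req->flags = HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX;
+	/* The size field counts bytes, not entries. */
+	req->size = (uint16_t)(n_caps * sizeof(struct tf_rm_cap));
+	req->qcaps_addr = caps_dma;
+}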
+
+/******************************
+ * hwrm_tf_session_resc_alloc *
+ ******************************/
+
+
+/* hwrm_tf_session_resc_alloc_input (size:256b/32B) */
+struct hwrm_tf_session_resc_alloc_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided num_addr
+	 * buffer.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the num input data array
+	 * buffer. Array is of tf_rm_num type. Size of the buffer is
+	 * provided by the 'size' field in this message.
+	 */
+	uint64_t	num_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_alloc_output (size:128b/16B) */
+struct hwrm_tf_session_resc_alloc_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
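+
+/*
+ * A sketch of preparing a resc_alloc request, mirroring the qcaps
+ * pattern: nums_dma is assumed to be the bus address of a DMA-able
+ * array of n_types tf_rm_num entries, each naming a resource type
+ * from the hwrm_tf_resc_type enum and the count wanted.
+ */
+static void
+tf_prep_resc_alloc(struct hwrm_tf_session_resc_alloc_input *req,
+		   uint32_t fw_session_id, int tx,
+		   uint64_t nums_dma, uint16_t n_types)
+{
+	req->fw_session_id = fw_session_id;
+	req->flags = tx ? HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX :
+			  HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_RX;
+	/* Size of the tf_rm_num array in bytes. */
+	req->size = (uint16_t)(n_types * sizeof(struct tf_rm_num));
+	req->num_addr = nums_dma;
+}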
+
+/*****************************
+ * hwrm_tf_session_resc_free *
+ *****************************/
+
+
+/* hwrm_tf_session_resc_free_input (size:256b/32B) */
+struct hwrm_tf_session_resc_free_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided free_addr
+	 * buffer.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the free input data array
+	 * buffer.  Array of tf_rm_res type. Size of the buffer is
+	 * provided by the 'size' field of this message.
+	 */
+	uint64_t	free_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_free_output (size:128b/16B) */
+struct hwrm_tf_session_resc_free_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/******************************
+ * hwrm_tf_session_resc_flush *
+ ******************************/
+
+
+/* hwrm_tf_session_resc_flush_input (size:256b/32B) */
+struct hwrm_tf_session_resc_flush_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided flush_addr
+	 * buffer.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the flush input data array
+	 * buffer.  Array of tf_rm_res type. Size of the buffer is
+	 * provided by the 'size' field in this message.
+	 */
+	uint64_t	flush_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_flush_output (size:128b/16B) */
+struct hwrm_tf_session_resc_flush_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
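+
+/*
+ * The valid-field convention above applies to every TF output
+ * record: a sketch of the read side, polling valid before trusting
+ * the rest of the DMA'ed response. read_barrier() is a hypothetical
+ * stand-in for the platform's read memory barrier.
+ */
+extern void read_barrier(void);	/* hypothetical; e.g. an rmb() */
+
+static int
+tf_resp_ready(const volatile struct hwrm_tf_session_resc_flush_output *resp)
+{
+	if (resp->valid != 1)
+		return 0;	/* firmware has not finished writing */
+	read_barrier();		/* order the valid read before field reads */
+	return 1;
+}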
+
+/* TruFlow RM capability of a resource. */
+/* tf_rm_cap (size:64b/8B) */
+struct tf_rm_cap {
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Minimum value. */
+	uint16_t	min;
+	/* Maximum value. */
+	uint16_t	max;
+} __attribute__((packed));
+
+/* TruFlow RM number of a resource. */
+/* tf_rm_num (size:64b/8B) */
+struct tf_rm_num {
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Number of resources. */
+	uint32_t	num;
+} __attribute__((packed));
+
+/* TruFlow RM reservation information. */
+/* tf_rm_res (size:64b/8B) */
+struct tf_rm_res {
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Start offset. */
+	uint16_t	start;
+	/* Number of resources. */
+	uint16_t	stride;
+} __attribute__((packed));
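+
+/*
+ * A small sketch of consuming a tf_rm_res reservation, assuming from
+ * the field comments above that the reserved range is
+ * [start, start + stride).
+ */
+static inline int
+tf_rm_res_contains(const struct tf_rm_res *res, uint16_t index)
+{
+	return index >= res->start &&
+	       (uint32_t)index < (uint32_t)res->start + res->stride;
+}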
+
+/************************
+ * hwrm_tf_tbl_type_get *
+ ************************/
+
+
+/* hwrm_tf_tbl_type_get_input (size:256b/32B) */
+struct hwrm_tf_tbl_type_get_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint8_t	unused0[2];
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Index of the type to retrieve. */
+	uint32_t	index;
+} __attribute__((packed));
+
+/* hwrm_tf_tbl_type_get_output (size:1216b/152B) */
+struct hwrm_tf_tbl_type_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Response code. */
+	uint32_t	resp_code;
+	/* Response size. */
+	uint16_t	size;
+	/* unused */
+	uint16_t	unused0;
+	/* Response data. */
+	uint8_t	data[128];
+	/* unused */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_tbl_type_set *
+ ************************/
+
+
+/* hwrm_tf_tbl_type_set_input (size:1024b/128B) */
+struct hwrm_tf_tbl_type_set_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint8_t	unused0[2];
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Index of the type to set. */
+	uint32_t	index;
+	/* Size of the data to set. */
+	uint16_t	size;
+	/* unused */
+	uint8_t	unused1[6];
+	/* Data to be set. */
+	uint8_t	data[88];
+} __attribute__((packed));
+
+/* hwrm_tf_tbl_type_set_output (size:128b/16B) */
+struct hwrm_tf_tbl_type_set_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
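+
+/*
+ * A sketch of filling a tbl_type_set request for a small entry; the
+ * entry/len arguments are illustrative, and memcpy assumes
+ * <string.h> is available.
+ */
+static int
+tf_prep_tbl_type_set(struct hwrm_tf_tbl_type_set_input *req,
+		     uint32_t type, uint32_t index,
+		     const void *entry, uint16_t len)
+{
+	if (len > sizeof(req->data))
+		return -1;	/* too large for the 88-byte data area */
+	req->type = type;
+	req->index = index;
+	req->size = len;
+	memcpy(req->data, entry, len);
+	return 0;
+}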
+
+/*************************
+ * hwrm_tf_ctxt_mem_rgtr *
+ *************************/
+
+
+/* hwrm_tf_ctxt_mem_rgtr_input (size:256b/32B) */
+struct hwrm_tf_ctxt_mem_rgtr_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Counter PBL indirect levels. */
+	uint8_t	page_level;
+	/* PBL pointer is physical start address. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_0 UINT32_C(0x0)
+	/* PBL pointer points to PTE table. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_1 UINT32_C(0x1)
+	/*
+	 * PBL pointer points to PDE table with each entry pointing
+	 * to PTE tables.
+	 */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_2 UINT32_C(0x2)
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LAST \
+		HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_2
+	/* Page size. */
+	uint8_t	page_size;
+	/* 4KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K   UINT32_C(0x0)
+	/* 8KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K   UINT32_C(0x1)
+	/* 64KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K  UINT32_C(0x4)
+	/* 256KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K UINT32_C(0x6)
+	/* 1MB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M   UINT32_C(0x8)
+	/* 2MB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M   UINT32_C(0x9)
+	/* 4MB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M   UINT32_C(0xa)
+	/* 1GB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G   UINT32_C(0x12)
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_LAST \
+		HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+	/* unused. */
+	uint32_t	unused0;
+	/* Pointer to the PBL, or PDL, depending on the number of levels. */
+	uint64_t	page_dir;
+} __attribute__((packed));
+
+/* hwrm_tf_ctxt_mem_rgtr_output (size:128b/16B) */
+struct hwrm_tf_ctxt_mem_rgtr_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Id/Handle to the recently registered context memory. This
+	 * handle is passed to the TF session.
+	 */
+	uint16_t	ctx_id;
+	/* unused. */
+	uint8_t	unused0[5];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
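+
+/*
+ * A sketch of the PBL fields for the simplest registration case: a
+ * single physically contiguous 2MB region needs no indirection, so
+ * page_dir is the region's bus address itself (region_dma is assumed
+ * to be pre-translated by the caller).
+ */
+static void
+tf_prep_ctxt_mem_rgtr(struct hwrm_tf_ctxt_mem_rgtr_input *req,
+		      uint64_t region_dma)
+{
+	/* LVL_0: the PBL pointer is the physical start address. */
+	req->page_level = HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_0;
+	req->page_size = HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M;
+	req->page_dir = region_dma;
+}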
+
+/***************************
+ * hwrm_tf_ctxt_mem_unrgtr *
+ ***************************/
+
+
+/* hwrm_tf_ctxt_mem_unrgtr_input (size:192b/24B) */
+struct hwrm_tf_ctxt_mem_unrgtr_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Id/Handle to the recently registered context memory. This
+	 * handle is passed to the TF session.
+	 */
+	uint16_t	ctx_id;
+	/* unused. */
+	uint8_t	unused0[6];
+} __attribute__((packed));
+
+/* hwrm_tf_ctxt_mem_unrgtr_output (size:128b/16B) */
+struct hwrm_tf_ctxt_mem_unrgtr_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_ext_em_qcaps *
+ ************************/
+
+
+/* hwrm_tf_ext_em_qcaps_input (size:192b/24B) */
+struct hwrm_tf_ext_em_qcaps_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR \
+		UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_RX \
+		UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX \
+		UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX
+	/* When set to 1, all offloaded flows will be sent to EXT EM. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x2)
+	/* unused. */
+	uint32_t	unused0;
+} __attribute__((packed));
+
+/* hwrm_tf_ext_em_qcaps_output (size:320b/40B) */
+struct hwrm_tf_ext_em_qcaps_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/*
+	 * When set to 1, indicates that the FW supports the Centralized
+	 * Memory Model. The concept designates one entity for the
+	 * memory allocation while all others 'subscribe' to it.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_FLAGS_CENTRALIZED_MEMORY_MODEL_SUPPORTED \
+		UINT32_C(0x1)
+	/*
+	 * When set to 1, indicates that the FW supports the Detached
+	 * Centralized Memory Model. The memory is allocated and managed
+	 * as a separate entity. All PFs and VFs will be granted direct
+	 * or semi-direct access to the allocated memory while none of
+	 * which can interfere with the management of the memory.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_FLAGS_DETACHED_CENTRALIZED_MEMORY_MODEL_SUPPORTED \
+		UINT32_C(0x2)
+	/* unused. */
+	uint32_t	unused0;
+	/* Support flags. */
+	uint32_t	supported;
+	/*
+	 * If set to 1, then EXT EM KEY0 table is supported using
+	 * crc32 hash.
+	 * If set to 0, EXT EM KEY0 table is not supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_KEY0_TABLE \
+		UINT32_C(0x1)
+	/*
+	 * If set to 1, then EXT EM KEY1 table is supported using
+	 * lookup3 hash.
+	 * If set to 0, EXT EM KEY1 table is not supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_KEY1_TABLE \
+		UINT32_C(0x2)
+	/*
+	 * If set to 1, then EXT EM External Record table is supported.
+	 * If set to 0, EXT EM External Record table is not
+	 * supported.  (This table includes action record, EFC
+	 * pointers, encap pointers)
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_EXTERNAL_RECORD_TABLE \
+		UINT32_C(0x4)
+	/*
+	 * If set to 1, then EXT EM External Flow Counters table is
+	 * supported.
+	 * If set to 0, EXT EM External Flow Counters table is not
+	 * supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_EXTERNAL_FLOW_COUNTERS_TABLE \
+		UINT32_C(0x8)
+	/*
+	 * If set to 1, then FID table used for implicit flow flush
+	 * is supported.
+	 * If set to 0, then FID table used for implicit flow flush
+	 * is not supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_FID_TABLE \
+		UINT32_C(0x10)
+	/*
+	 * The maximum number of entries supported by EXT EM. When
+	 * configuring the host memory, the supported numbers of
+	 * entries are:
+	 *      32k, 64k, 128k, 256k, 512k, 1M, 2M, 4M, 8M, 32M, 64M,
+	 *      128M entries.
+	 * For any other value, the FW will round down to the
+	 * closest supported number of entries.
+	 */
+	uint32_t	max_entries_supported;
+	/*
+	 * The entry size in bytes of each entry in the EXT EM
+	 * KEY0/KEY1 tables.
+	 */
+	uint16_t	key_entry_size;
+	/*
+	 * The entry size in bytes of each entry in the EXT EM RECORD
+	 * tables.
+	 */
+	uint16_t	record_entry_size;
+	/* The entry size in bytes of each entry in the EXT EM EFC tables. */
+	uint16_t	efc_entry_size;
+	/* The FID size in bytes of each entry in the EXT EM FID tables. */
+	uint16_t	fid_entry_size;
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
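+
+/*
+ * A sketch of rounding a requested EXT EM entry count down to the
+ * supported set listed above, as the firmware is described to do.
+ * Requests below the 32k minimum are clamped up to it here; actual
+ * firmware behaviour for such values is not specified.
+ */
+static uint32_t
+tf_clamp_em_entries(uint32_t requested)
+{
+	static const uint32_t supported[] = {
+		32768, 65536, 131072, 262144, 524288, 1048576,
+		2097152, 4194304, 8388608, 33554432, 67108864,
+		134217728,
+	};
+	uint32_t best = supported[0];
+	unsigned int i;
+
+	/* Keep the largest supported count not above the request. */
+	for (i = 0; i < sizeof(supported) / sizeof(supported[0]); i++)
+		if (supported[i] <= requested)
+			best = supported[i];
+	return best;
+}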
+
+/*********************
+ * hwrm_tf_ext_em_op *
+ *********************/
+
+
+/* hwrm_tf_ext_em_op_input (size:192b/24B) */
+struct hwrm_tf_ext_em_op_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint16_t	unused0;
+	/* The EXT EM operation to perform, from the values below. */
+	uint16_t	op;
+	/* This value is reserved and should not be used. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_RESERVED       UINT32_C(0x0)
+	/*
+	 * To properly stop EXT EM and ensure there are no DMA's,
+	 * the caller must disable EXT EM for the given PF, using
+	 * this call. This will safely disable EXT EM and ensure
+	 * that all DMA'ed to the keys/records/efc have been
+	 * completed.
+	 */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE UINT32_C(0x1)
+	/*
+	 * Once the EXT EM host memory and EXT EM options have been
+	 * configured, the caller should enable EXT EM for the
+	 * given PF. Note once this call has
+	 * been made, then the EXT EM mechanism will be active and
+	 * DMA's will occur as packets are processed.
+	 */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE  UINT32_C(0x2)
+	/*
+	 * Clear EXT EM settings for the given PF so that the
+	 * register values are reset back to their initial state.
+	 */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_CLEANUP UINT32_C(0x3)
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_LAST \
+		HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_CLEANUP
+	/* unused. */
+	uint16_t	unused1;
+} __attribute__((packed));
+
+/* hwrm_tf_ext_em_op_output (size:128b/16B) */
+struct hwrm_tf_ext_em_op_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
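+
+/*
+ * A sketch of the ordering the op values above imply: disable first
+ * to quiesce DMA, program the configuration, then enable. send_tf_op
+ * and send_tf_cfg are hypothetical wrappers around the HWRM
+ * transport, not functions in this driver.
+ */
+extern int send_tf_op(void *bp, uint16_t op);	/* hypothetical */
+extern int send_tf_cfg(void *bp);		/* hypothetical */
+
+static int
+tf_ext_em_restart(void *bp)
+{
+	int rc;
+
+	rc = send_tf_op(bp, HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+	if (rc)
+		return rc;
+	rc = send_tf_cfg(bp);	/* num_entries and the five ctx ids */
+	if (rc)
+		return rc;
+	return send_tf_op(bp, HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
+}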
+
+/**********************
+ * hwrm_tf_ext_em_cfg *
+ **********************/
+
+
+/* hwrm_tf_ext_em_cfg_input (size:384b/48B) */
+struct hwrm_tf_ext_em_cfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR \
+		UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX \
+		UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX \
+		UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX
+	/* When set to 1, all offloaded flows will be sent to EXT EM. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x2)
+	/* When set to 1, secondary, 0 means primary. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_SECONDARY_PF \
+		UINT32_C(0x4)
+	/*
+	 * Group_id used by the firmware to identify memory pools
+	 * belonging to a certain group.
+	 */
+	uint16_t	group_id;
+	/*
+	 * Dynamically reconfigure EEM pending cache every 1/10th of a second.
+	 * If set to 0 it will disable the EEM HW flush of the pending cache.
+	 */
+	uint8_t	flush_interval;
+	/* unused. */
+	uint8_t	unused0;
+	/*
+	 * Configure EXT EM with the given number of entries. The
+	 * EXT EM tables KEY0, KEY1, RECORD and EFC all have the
+	 * same number of entries and all tables will be configured
+	 * using this value. Current minimum value is 32k. Current
+	 * maximum value is 128M.
+	 */
+	uint32_t	num_entries;
+	/* unused. */
+	uint32_t	unused1;
+	/* Configure EXT EM with the given context id for the KEY0 table. */
+	uint16_t	key0_ctx_id;
+	/* Configure EXT EM with the given context id for the KEY1 table. */
+	uint16_t	key1_ctx_id;
+	/* Configure EXT EM with the given context id for the RECORD table. */
+	uint16_t	record_ctx_id;
+	/* Configure EXT EM with the given context id for the EFC table. */
+	uint16_t	efc_ctx_id;
+	/* Configure EXT EM with the given context id for the FID table. */
+	uint16_t	fid_ctx_id;
+	/* unused. */
+	uint16_t	unused2;
+	/* unused. */
+	uint32_t	unused3;
+} __attribute__((packed));
+
+/* hwrm_tf_ext_em_cfg_output (size:128b/16B) */
+struct hwrm_tf_ext_em_cfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/***********************
+ * hwrm_tf_ext_em_qcfg *
+ ***********************/
+
+
+/* hwrm_tf_ext_em_qcfg_input (size:192b/24B) */
+struct hwrm_tf_ext_em_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint32_t	unused0;
+} __attribute__((packed));
+
+/* hwrm_tf_ext_em_qcfg_output (size:256b/32B) */
+struct hwrm_tf_ext_em_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR \
+		UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_RX \
+		UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_TX \
+		UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_TX
+	/* When set to 1, all offloaded flows will be sent to EXT EM. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x2)
+	/* The number of entries the FW has configured for EXT EM. */
+	uint32_t	num_entries;
+	/* The context id configured for the KEY0 table. */
+	uint16_t	key0_ctx_id;
+	/* The context id configured for the KEY1 table. */
+	uint16_t	key1_ctx_id;
+	/* The context id configured for the RECORD table. */
+	uint16_t	record_ctx_id;
+	/* The context id configured for the EFC table. */
+	uint16_t	efc_ctx_id;
+	/* The context id configured for the FID table. */
+	uint16_t	fid_ctx_id;
+	/* unused. */
+	uint8_t	unused0[5];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/********************
+ * hwrm_tf_tcam_set *
+ ********************/
+
+
+/* hwrm_tf_tcam_set_input (size:1024b/128B) */
+struct hwrm_tf_tcam_set_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX
+	/*
+	 * Indicates the device data is being sent via DMA; the
+	 * device data packing does not change.
+	 */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA     UINT32_C(0x2)
+	/*
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Index of TCAM entry. */
+	uint16_t	idx;
+	/* Number of bytes in the TCAM key. */
+	uint8_t	key_size;
+	/* Number of bytes in the TCAM result. */
+	uint8_t	result_size;
+	/*
+	 * Offset from which the mask bytes start in the device data
+	 * array, key offset is always 0.
+	 */
+	uint8_t	mask_offset;
+	/* Offset from which the result bytes start in the device data array. */
+	uint8_t	result_offset;
+	/* unused. */
+	uint8_t	unused0[6];
+	/*
+	 * TCAM key located at offset 0, mask located at mask_offset
+	 * and result at result_offset for the device.
+	 */
+	uint8_t	dev_data[88];
 } __attribute__((packed));
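+
+/*
+ * A sketch of packing the device data area described above: key at
+ * offset 0, mask at mask_offset, result at result_offset. Buffer
+ * names are illustrative, and memcpy assumes <string.h>.
+ */
+static int
+tf_pack_tcam_dev_data(struct hwrm_tf_tcam_set_input *req,
+		      const uint8_t *key, uint8_t key_len,
+		      const uint8_t *mask,
+		      const uint8_t *result, uint8_t result_len)
+{
+	if ((uint32_t)key_len * 2 + result_len > sizeof(req->dev_data))
+		return -1;	/* key + mask + result must fit */
+	req->key_size = key_len;
+	req->result_size = result_len;
+	req->mask_offset = key_len;		/* mask follows the key */
+	req->result_offset = (uint8_t)(key_len * 2); /* then the result */
+	memcpy(req->dev_data, key, key_len);
+	memcpy(req->dev_data + req->mask_offset, mask, key_len);
+	memcpy(req->dev_data + req->result_offset, result, result_len);
+	return 0;
+}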
 
-/* hwrm_cfa_eem_qcfg_output (size:256b/32B) */
-struct hwrm_cfa_eem_qcfg_output {
+/* hwrm_tf_tcam_set_output (size:128b/16B) */
+struct hwrm_tf_tcam_set_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -31848,46 +35014,26 @@ struct hwrm_cfa_eem_qcfg_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	uint32_t	flags;
-	/* When set to 1, indicates the configuration is the TX flow. */
-	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_TX \
-		UINT32_C(0x1)
-	/* When set to 1, indicates the configuration is the RX flow. */
-	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_RX \
-		UINT32_C(0x2)
-	/* When set to 1, all offloaded flows will be sent to EEM. */
-	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD \
-		UINT32_C(0x4)
-	/* The number of entries the FW has configured for EEM. */
-	uint32_t	num_entries;
-	/* Configured EEM with the given context if for KEY0 table. */
-	uint16_t	key0_ctx_id;
-	/* Configured EEM with the given context if for KEY1 table. */
-	uint16_t	key1_ctx_id;
-	/* Configured EEM with the given context if for RECORD table. */
-	uint16_t	record_ctx_id;
-	/* Configured EEM with the given context if for EFC table. */
-	uint16_t	efc_ctx_id;
-	/* Configured EEM with the given context if for EFC table. */
-	uint16_t	fid_ctx_id;
-	uint8_t	unused_2[5];
+	/* unused. */
+	uint8_t	unused0[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
 
-/*******************
- * hwrm_cfa_eem_op *
- *******************/
+/********************
+ * hwrm_tf_tcam_get *
+ ********************/
 
 
-/* hwrm_cfa_eem_op_input (size:192b/24B) */
-struct hwrm_cfa_eem_op_input {
+/* hwrm_tf_tcam_get_input (size:256b/32B) */
+struct hwrm_tf_tcam_get_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -31916,49 +35062,31 @@ struct hwrm_cfa_eem_op_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
 	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_TX
 	/*
-	 * When set to 1, indicates the host memory which is passed will be
-	 * used for the TX flow offload function specified in fid.
-	 * Note if this bit is set then the path_rx bit can't be set.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
-	/*
-	 * When set to 1, indicates the host memory which is passed will be
-	 * used for the RX flow offload function specified in fid.
-	 * Note if this bit is set then the path_tx bit can't be set.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
-	uint16_t	unused_0;
-	/* The number of EEM key table entries to be configured. */
-	uint16_t	op;
-	/* This value is reserved and should not be used. */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_RESERVED    UINT32_C(0x0)
-	/*
-	 * To properly stop EEM and ensure there are no DMA's, the caller
-	 * must disable EEM for the given PF, using this call. This will
-	 * safely disable EEM and ensure that all DMA'ed to the
-	 * keys/records/efc have been completed.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_DISABLE UINT32_C(0x1)
-	/*
-	 * Once the EEM host memory has been configured, EEM options have
-	 * been configured. Then the caller should enable EEM for the given
-	 * PF. Note once this call has been made, then the EEM mechanism
-	 * will be active and DMA's will occur as packets are processed.
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
 	 */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_ENABLE  UINT32_C(0x2)
-	/*
-	 * Clear EEM settings for the given PF so that the register values
-	 * are reset back to there initial state.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP UINT32_C(0x3)
-	#define HWRM_CFA_EEM_OP_INPUT_OP_LAST \
-		HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP
+	uint32_t	type;
+	/* Index of a TCAM entry. */
+	uint16_t	idx;
+	/* unused. */
+	uint16_t	unused0;
 } __attribute__((packed));
 
-/* hwrm_cfa_eem_op_output (size:128b/16B) */
-struct hwrm_cfa_eem_op_output {
+/* hwrm_tf_tcam_get_output (size:2368b/296B) */
+struct hwrm_tf_tcam_get_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -31967,24 +35095,41 @@ struct hwrm_cfa_eem_op_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	uint8_t	unused_0[7];
+	/* Number of bytes in the TCAM key. */
+	uint8_t	key_size;
+	/* Number of bytes in the TCAM result. */
+	uint8_t	result_size;
+	/* Offset from which the mask bytes start in the device data array. */
+	uint8_t	mask_offset;
+	/* Offset from which the result bytes start in the device data array. */
+	uint8_t	result_offset;
+	/* unused. */
+	uint8_t	unused0[4];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * TCAM key located at offset 0, mask located at mask_offset
+	 * and result at result_offset for the device.
+	 */
+	uint8_t	dev_data[272];
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
 
-/********************************
- * hwrm_cfa_adv_flow_mgnt_qcaps *
- ********************************/
+/*********************
+ * hwrm_tf_tcam_move *
+ *********************/
 
 
-/* hwrm_cfa_adv_flow_mgnt_qcaps_input (size:256b/32B) */
-struct hwrm_cfa_adv_flow_mgnt_qcaps_input {
+/* hwrm_tf_tcam_move_input (size:1024b/128B) */
+struct hwrm_tf_tcam_move_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -32013,11 +35158,33 @@ struct hwrm_cfa_adv_flow_mgnt_qcaps_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-	uint32_t	unused_0[4];
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, it indicates an rx flow. */
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, it indicates a tx flow. */
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_TX
+	/*
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Number of TCAM index pairs to be swapped for the device. */
+	uint16_t	count;
+	/* unused. */
+	uint16_t	unused0;
+	/* TCAM index pairs to be swapped for the device. */
+	uint16_t	idx_pairs[48];
 } __attribute__((packed));
 
-/* hwrm_cfa_adv_flow_mgnt_qcaps_output (size:128b/16B) */
-struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
+/* hwrm_tf_tcam_move_output (size:128b/16B) */
+struct hwrm_tf_tcam_move_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -32026,131 +35193,26 @@ struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	uint32_t	flags;
-	/*
-	 * Value of 1 to indicate firmware support 16-bit flow handle.
-	 * Value of 0 to indicate firmware not support 16-bit flow handle.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_16BIT_SUPPORTED \
-		UINT32_C(0x1)
-	/*
-	 * Value of 1 to indicate firmware support 64-bit flow handle.
-	 * Value of 0 to indicate firmware not support 64-bit flow handle.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_64BIT_SUPPORTED \
-		UINT32_C(0x2)
-	/*
-	 * Value of 1 to indicate firmware support flow batch delete operation through
-	 * HWRM_CFA_FLOW_FLUSH command.
-	 * Value of 0 to indicate that the firmware does not support flow batch delete
-	 * operation.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_BATCH_DELETE_SUPPORTED \
-		UINT32_C(0x4)
-	/*
-	 * Value of 1 to indicate that the firmware support flow reset all operation through
-	 * HWRM_CFA_FLOW_FLUSH command.
-	 * Value of 0 indicates firmware does not support flow reset all operation.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_RESET_ALL_SUPPORTED \
-		UINT32_C(0x8)
-	/*
-	 * Value of 1 to indicate that firmware supports use of FID as dest_id in
-	 * HWRM_CFA_NTUPLE_ALLOC/CFG commands.
-	 * Value of 0 indicates firmware does not support use of FID as dest_id.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_DEST_FUNC_SUPPORTED \
-		UINT32_C(0x10)
-	/*
-	 * Value of 1 to indicate that firmware supports TX EEM flows.
-	 * Value of 0 indicates firmware does not support TX EEM flows.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_TX_EEM_FLOW_SUPPORTED \
-		UINT32_C(0x20)
-	/*
-	 * Value of 1 to indicate that firmware supports RX EEM flows.
-	 * Value of 0 indicates firmware does not support RX EEM flows.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RX_EEM_FLOW_SUPPORTED \
-		UINT32_C(0x40)
-	/*
-	 * Value of 1 to indicate that firmware supports the dynamic allocation of an
-	 * on-chip flow counter which can be used for EEM flows.
-	 * Value of 0 indicates firmware does not support the dynamic allocation of an
-	 * on-chip flow counter.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_COUNTER_ALLOC_SUPPORTED \
-		UINT32_C(0x80)
-	/*
-	 * Value of 1 to indicate that firmware supports setting of
-	 * rfs_ring_tbl_idx in HWRM_CFA_NTUPLE_ALLOC command.
-	 * Value of 0 indicates firmware does not support rfs_ring_tbl_idx.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_SUPPORTED \
-		UINT32_C(0x100)
-	/*
-	 * Value of 1 to indicate that firmware supports untagged matching
-	 * criteria on HWRM_CFA_L2_FILTER_ALLOC command. Value of 0
-	 * indicates firmware does not support untagged matching.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_UNTAGGED_VLAN_SUPPORTED \
-		UINT32_C(0x200)
-	/*
-	 * Value of 1 to indicate that firmware supports XDP filter. Value
-	 * of 0 indicates firmware does not support XDP filter.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_XDP_SUPPORTED \
-		UINT32_C(0x400)
-	/*
-	 * Value of 1 to indicate that the firmware support L2 header source
-	 * fields matching criteria on HWRM_CFA_L2_FILTER_ALLOC command.
-	 * Value of 0 indicates firmware does not support L2 header source
-	 * fields matching.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED \
-		UINT32_C(0x800)
-	/*
-	 * If set to 1, firmware is capable of supporting ARP ethertype as
-	 * matching criteria for HWRM_CFA_NTUPLE_FILTER_ALLOC command on the
-	 * RX direction. By default, this flag should be 0 for older version
-	 * of firmware.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ARP_SUPPORTED \
-		UINT32_C(0x1000)
-	/*
-	 * Value of 1 to indicate that firmware supports setting of
-	 * rfs_ring_tbl_idx in dst_id field of the HWRM_CFA_NTUPLE_ALLOC
-	 * command. Value of 0 indicates firmware does not support
-	 * rfs_ring_tbl_idx in dst_id field.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_V2_SUPPORTED \
-		UINT32_C(0x2000)
-	/*
-	 * If set to 1, firmware is capable of supporting IPv4/IPv6 as
-	 * ethertype in HWRM_CFA_NTUPLE_FILTER_ALLOC command on the RX
-	 * direction. By default, this flag should be 0 for older version
-	 * of firmware.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ETHERTYPE_IP_SUPPORTED \
-		UINT32_C(0x4000)
-	uint8_t	unused_0[3];
+	/* unused. */
+	uint8_t	unused0[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
 
-/******************
- * hwrm_cfa_tflib *
- ******************/
+/*********************
+ * hwrm_tf_tcam_free *
+ *********************/
 
 
-/* hwrm_cfa_tflib_input (size:1024b/128B) */
-struct hwrm_cfa_tflib_input {
+/* hwrm_tf_tcam_free_input (size:1024b/128B) */
+struct hwrm_tf_tcam_free_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -32179,18 +35241,33 @@ struct hwrm_cfa_tflib_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-	/* TFLIB message type. */
-	uint16_t	tf_type;
-	/* TFLIB message subtype. */
-	uint16_t	tf_subtype;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX
+	/*
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Number of TCAM indices to be deleted for the device. */
+	uint16_t	count;
 	/* unused. */
-	uint8_t	unused0[4];
-	/* TFLIB request data. */
-	uint32_t	tf_req[26];
+	uint16_t	unused0;
+	/* TCAM index list to be deleted for the device. */
+	uint16_t	idx_list[48];
 } __attribute__((packed));
 
-/* hwrm_cfa_tflib_output (size:5632b/704B) */
-struct hwrm_cfa_tflib_output {
+/* hwrm_tf_tcam_free_output (size:128b/16B) */
+struct hwrm_tf_tcam_free_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -32199,22 +35276,15 @@ struct hwrm_cfa_tflib_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	/* TFLIB message type. */
-	uint16_t	tf_type;
-	/* TFLIB message subtype. */
-	uint16_t	tf_subtype;
-	/* TFLIB response code */
-	uint32_t	tf_resp_code;
-	/* TFLIB response data. */
-	uint32_t	tf_resp[170];
 	/* unused. */
-	uint8_t	unused1[7];
+	uint8_t	unused0[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
@@ -33155,9 +36225,9 @@ struct pcie_ctx_hw_stats {
 	uint64_t	pcie_tl_signal_integrity;
 	/* Number of times LTSSM entered Recovery state */
 	uint64_t	pcie_link_integrity;
-	/* Number of TLP bytes that have been transmitted */
+	/* Reports the TLP transmit traffic rate in Mbps. */
 	uint64_t	pcie_tx_traffic_rate;
-	/* Number of TLP bytes that have been received */
+	/* Reports the TLP receive traffic rate in Mbps. */
 	uint64_t	pcie_rx_traffic_rate;
 	/* Number of DLLP bytes that have been transmitted */
 	uint64_t	pcie_tx_dllp_statistics;
@@ -33981,7 +37051,23 @@ struct hwrm_nvm_modify_input {
 	uint64_t	host_src_addr;
 	/* 16-bit directory entry index. */
 	uint16_t	dir_idx;
-	uint8_t	unused_0[2];
+	uint16_t	flags;
+	/*
+	 * This flag indicates that the sender wants to modify a
+	 * contiguous NVRAM area using a batch of these HWRM requests.
+	 * The offset of each request must be contiguous with the end
+	 * of the previous request's data. Firmware does not update the
+	 * directory entry until it receives the last request, which is
+	 * indicated by the batch_last flag. This flag is usually set
+	 * when the sender does not have a block of memory big enough
+	 * to hold the entire NVRAM data to send in one request.
+	 */
+	#define HWRM_NVM_MODIFY_INPUT_FLAGS_BATCH_MODE     UINT32_C(0x1)
+	/*
+	 * This flag can be used only when the batch_mode flag is set.
+	 * It indicates that this request is the last of the batch.
+	 */
+	#define HWRM_NVM_MODIFY_INPUT_FLAGS_BATCH_LAST     UINT32_C(0x2)
 	/* 32-bit NVRAM byte-offset to modify content from. */
 	uint32_t	offset;
 	/*
-- 
2.7.4

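For illustration, a minimal sketch of populating the new
hwrm_tf_tcam_free request defined above. This is not part of the
patch; the session id, TCAM type and index values are hypothetical,
and the little-endian conversions a real caller would apply are
omitted:

	#include <string.h>
	#include "hsi_struct_def_dpdk.h"

	/* Hypothetical example: free two TX-direction TCAM entries.
	 * All values are illustrative only.
	 */
	static void build_tcam_free_req(struct hwrm_tf_tcam_free_input *req,
					uint32_t fw_session_id)
	{
		memset(req, 0, sizeof(*req));
		req->fw_session_id = fw_session_id;
		/* Bit 0 of flags selects the direction: 0 = RX, 1 = TX. */
		req->flags = HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
		req->type = 0;		/* a hwrm_tf_resc_type enum value */
		req->count = 2;		/* entries used in idx_list[] */
		req->idx_list[0] = 10;
		req->idx_list[1] = 11;
	}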

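The batch_mode/batch_last flags added to hwrm_nvm_modify let a sender
stream one contiguous NVRAM area across several requests when it
cannot buffer the whole image at once. A hedged sketch of the chunking
loop; send_nvm_modify() is a hypothetical wrapper that fills in and
issues a single HWRM_NVM_MODIFY request:

	#include <stdint.h>
	#include "hsi_struct_def_dpdk.h"

	#define CHUNK 4096	/* illustrative chunk size */

	/* Hypothetical wrapper around one HWRM_NVM_MODIFY exchange. */
	int send_nvm_modify(uint16_t dir_idx, uint32_t offset,
			    const uint8_t *buf, uint32_t len, uint16_t flags);

	/* Offsets must be contiguous; firmware updates the directory
	 * entry only after the request carrying BATCH_LAST completes.
	 */
	static int nvm_modify_batched(uint16_t dir_idx, const uint8_t *img,
				      uint32_t total)
	{
		uint32_t off;

		for (off = 0; off < total; off += CHUNK) {
			uint32_t len = total - off < CHUNK ? total - off : CHUNK;
			uint16_t flags = HWRM_NVM_MODIFY_INPUT_FLAGS_BATCH_MODE;
			int rc;

			if (off + len == total)
				flags |= HWRM_NVM_MODIFY_INPUT_FLAGS_BATCH_LAST;

			rc = send_nvm_modify(dir_idx, off, img + off, len, flags);
			if (rc)
				return rc;
		}
		return 0;
	}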
^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 02/34] net/bnxt: update hwrm prep to use ptr
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 01/34] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 03/34] net/bnxt: add truflow message handlers Venkat Duvvuru
                     ` (33 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Randy Schacher

From: Randy Schacher <stuart.schacher@broadcom.com>

- Change HWRM_PREP to take a request pointer and use the
  full HWRM enum

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
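In practice the change at each call site is mechanical: the request is
now passed by address, and the full HWRM command enum is spelled out
instead of having the HWRM_ prefix pasted on by the macro. For
example, from the diff below:

	/* Before: struct by name, short token expanded to HWRM_<type> */
	HWRM_PREP(req, CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);

	/* After: request pointer plus the full enum value */
	HWRM_PREP(&req, HWRM_CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);

Taking a pointer also lets helpers that only hold a generic struct
input *, such as the truflow message handlers added later in this
series, reuse the same macro.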
---
 drivers/net/bnxt/bnxt.h      |   2 +-
 drivers/net/bnxt/bnxt_hwrm.c | 200 +++++++++++++++++++++----------------------
 2 files changed, 101 insertions(+), 101 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 3ae08a2..b795ed6 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -594,7 +594,7 @@ struct bnxt {
 
 	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
 
-	uint16_t			hwrm_cmd_seq;
+	uint16_t			chimp_cmd_seq;
 	uint16_t			kong_cmd_seq;
 	void				*hwrm_cmd_resp_addr;
 	rte_iova_t			hwrm_cmd_resp_dma_addr;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index a9c9c72..79e4156 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -182,19 +182,19 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
  *
  * HWRM_UNLOCK() must be called after all response processing is completed.
  */
-#define HWRM_PREP(req, type, kong) do { \
+#define HWRM_PREP(req, type, kong) do {	\
 	rte_spinlock_lock(&bp->hwrm_lock); \
 	if (bp->hwrm_cmd_resp_addr == NULL) { \
 		rte_spinlock_unlock(&bp->hwrm_lock); \
 		return -EACCES; \
 	} \
 	memset(bp->hwrm_cmd_resp_addr, 0, bp->max_resp_len); \
-	req.req_type = rte_cpu_to_le_16(HWRM_##type); \
-	req.cmpl_ring = rte_cpu_to_le_16(-1); \
-	req.seq_id = kong ? rte_cpu_to_le_16(bp->kong_cmd_seq++) :\
-		rte_cpu_to_le_16(bp->hwrm_cmd_seq++); \
-	req.target_id = rte_cpu_to_le_16(0xffff); \
-	req.resp_addr = rte_cpu_to_le_64(bp->hwrm_cmd_resp_dma_addr); \
+	(req)->req_type = rte_cpu_to_le_16(type); \
+	(req)->cmpl_ring = rte_cpu_to_le_16(-1); \
+	(req)->seq_id = kong ? rte_cpu_to_le_16(bp->kong_cmd_seq++) :\
+		rte_cpu_to_le_16(bp->chimp_cmd_seq++); \
+	(req)->target_id = rte_cpu_to_le_16(0xffff); \
+	(req)->resp_addr = rte_cpu_to_le_64(bp->hwrm_cmd_resp_dma_addr); \
 } while (0)
 
 #define HWRM_CHECK_RESULT_SILENT() do {\
@@ -263,7 +263,7 @@ int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	struct hwrm_cfa_l2_set_rx_mask_input req = {.req_type = 0 };
 	struct hwrm_cfa_l2_set_rx_mask_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 	req.mask = 0;
 
@@ -288,7 +288,7 @@ int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp,
 	if (vnic->fw_vnic_id == INVALID_HW_RING_ID)
 		return rc;
 
-	HWRM_PREP(req, CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
 	if (vnic->flags & BNXT_VNIC_INFO_BCAST)
@@ -347,7 +347,7 @@ int bnxt_hwrm_cfa_vlan_antispoof_cfg(struct bnxt *bp, uint16_t fid,
 				return 0;
 		}
 	}
-	HWRM_PREP(req, CFA_VLAN_ANTISPOOF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_VLAN_ANTISPOOF_CFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(fid);
 
 	req.vlan_tag_mask_tbl_addr =
@@ -389,7 +389,7 @@ int bnxt_hwrm_clear_l2_filter(struct bnxt *bp,
 	if (l2_filter->l2_ref_cnt > 0)
 		return 0;
 
-	HWRM_PREP(req, CFA_L2_FILTER_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_FILTER_FREE, BNXT_USE_CHIMP_MB);
 
 	req.l2_filter_id = rte_cpu_to_le_64(filter->fw_l2_filter_id);
 
@@ -440,7 +440,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 	if (filter->fw_l2_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_l2_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_L2_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -503,7 +503,7 @@ int bnxt_hwrm_ptp_cfg(struct bnxt *bp)
 	if (!ptp)
 		return 0;
 
-	HWRM_PREP(req, PORT_MAC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_MAC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (ptp->rx_filter)
 		flags |= HWRM_PORT_MAC_CFG_INPUT_FLAGS_PTP_RX_TS_CAPTURE_ENABLE;
@@ -536,7 +536,7 @@ static int bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
 	if (ptp)
 		return 0;
 
-	HWRM_PREP(req, PORT_MAC_PTP_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_MAC_PTP_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(bp->pf.port_id);
 
@@ -591,7 +591,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	uint32_t flags;
 	int i;
 
-	HWRM_PREP(req, FUNC_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCAPS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 
@@ -721,7 +721,7 @@ int bnxt_hwrm_vnic_qcaps(struct bnxt *bp)
 	struct hwrm_vnic_qcaps_input req = {.req_type = 0 };
 	struct hwrm_vnic_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_QCAPS, BNXT_USE_CHIMP_MB);
 
 	req.target_id = rte_cpu_to_le_16(0xffff);
 
@@ -748,7 +748,7 @@ int bnxt_hwrm_func_reset(struct bnxt *bp)
 	struct hwrm_func_reset_input req = {.req_type = 0 };
 	struct hwrm_func_reset_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_RESET, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_RESET, BNXT_USE_CHIMP_MB);
 
 	req.enables = rte_cpu_to_le_32(0);
 
@@ -781,7 +781,7 @@ int bnxt_hwrm_func_driver_register(struct bnxt *bp)
 	if ((BNXT_PF(bp) || BNXT_VF_IS_TRUSTED(bp)) && !BNXT_STINGRAY(bp))
 		flags |= HWRM_FUNC_DRV_RGTR_INPUT_FLAGS_MASTER_SUPPORT;
 
-	HWRM_PREP(req, FUNC_DRV_RGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_DRV_RGTR, BNXT_USE_CHIMP_MB);
 	req.enables = rte_cpu_to_le_32(HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_VER |
 			HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_ASYNC_EVENT_FWD);
 	req.ver_maj = RTE_VER_YEAR;
@@ -853,7 +853,7 @@ int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp, bool test)
 	struct hwrm_func_vf_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_func_vf_cfg_input req = {0};
 
-	HWRM_PREP(req, FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
 
 	enables = HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_RX_RINGS  |
 		  HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_TX_RINGS   |
@@ -919,7 +919,7 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp)
 	struct hwrm_func_resource_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_func_resource_qcaps_input req = {0};
 
-	HWRM_PREP(req, FUNC_RESOURCE_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_RESOURCE_QCAPS, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -964,7 +964,7 @@ int bnxt_hwrm_ver_get(struct bnxt *bp, uint32_t timeout)
 
 	bp->max_req_len = HWRM_MAX_REQ_LEN;
 	bp->hwrm_cmd_timeout = timeout;
-	HWRM_PREP(req, VER_GET, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VER_GET, BNXT_USE_CHIMP_MB);
 
 	req.hwrm_intf_maj = HWRM_VERSION_MAJOR;
 	req.hwrm_intf_min = HWRM_VERSION_MINOR;
@@ -1104,7 +1104,7 @@ int bnxt_hwrm_func_driver_unregister(struct bnxt *bp, uint32_t flags)
 	if (!(bp->flags & BNXT_FLAG_REGISTERED))
 		return 0;
 
-	HWRM_PREP(req, FUNC_DRV_UNRGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_DRV_UNRGTR, BNXT_USE_CHIMP_MB);
 	req.flags = flags;
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -1122,7 +1122,7 @@ static int bnxt_hwrm_port_phy_cfg(struct bnxt *bp, struct bnxt_link_info *conf)
 	struct hwrm_port_phy_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	uint32_t enables = 0;
 
-	HWRM_PREP(req, PORT_PHY_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_PHY_CFG, BNXT_USE_CHIMP_MB);
 
 	if (conf->link_up) {
 		/* Setting Fixed Speed. But AutoNeg is ON, So disable it */
@@ -1186,7 +1186,7 @@ static int bnxt_hwrm_port_phy_qcfg(struct bnxt *bp,
 	struct hwrm_port_phy_qcfg_input req = {0};
 	struct hwrm_port_phy_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, PORT_PHY_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_PHY_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -1265,7 +1265,7 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 	int i;
 
 get_rx_info:
-	HWRM_PREP(req, QUEUE_QPORTCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_QUEUE_QPORTCFG, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(dir);
 	/* HWRM Version >= 1.9.1 only if COS Classification is not required. */
@@ -1353,7 +1353,7 @@ int bnxt_hwrm_ring_alloc(struct bnxt *bp,
 	struct rte_mempool *mb_pool;
 	uint16_t rx_buf_size;
 
-	HWRM_PREP(req, RING_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.page_tbl_addr = rte_cpu_to_le_64(ring->bd_dma);
 	req.fbo = rte_cpu_to_le_32(0);
@@ -1477,7 +1477,7 @@ int bnxt_hwrm_ring_free(struct bnxt *bp,
 	struct hwrm_ring_free_input req = {.req_type = 0 };
 	struct hwrm_ring_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_FREE, BNXT_USE_CHIMP_MB);
 
 	req.ring_type = ring_type;
 	req.ring_id = rte_cpu_to_le_16(ring->fw_ring_id);
@@ -1525,7 +1525,7 @@ int bnxt_hwrm_ring_grp_alloc(struct bnxt *bp, unsigned int idx)
 	struct hwrm_ring_grp_alloc_input req = {.req_type = 0 };
 	struct hwrm_ring_grp_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_GRP_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_GRP_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.cr = rte_cpu_to_le_16(bp->grp_info[idx].cp_fw_ring_id);
 	req.rr = rte_cpu_to_le_16(bp->grp_info[idx].rx_fw_ring_id);
@@ -1549,7 +1549,7 @@ int bnxt_hwrm_ring_grp_free(struct bnxt *bp, unsigned int idx)
 	struct hwrm_ring_grp_free_input req = {.req_type = 0 };
 	struct hwrm_ring_grp_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_GRP_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_GRP_FREE, BNXT_USE_CHIMP_MB);
 
 	req.ring_group_id = rte_cpu_to_le_16(bp->grp_info[idx].fw_grp_id);
 
@@ -1571,7 +1571,7 @@ int bnxt_hwrm_stat_clear(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
 	if (cpr->hw_stats_ctx_id == (uint32_t)HWRM_NA_SIGNATURE)
 		return rc;
 
-	HWRM_PREP(req, STAT_CTX_CLR_STATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_CLR_STATS, BNXT_USE_CHIMP_MB);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cpr->hw_stats_ctx_id);
 
@@ -1590,7 +1590,7 @@ int bnxt_hwrm_stat_ctx_alloc(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	struct hwrm_stat_ctx_alloc_input req = {.req_type = 0 };
 	struct hwrm_stat_ctx_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.update_period_ms = rte_cpu_to_le_32(0);
 
@@ -1614,7 +1614,7 @@ int bnxt_hwrm_stat_ctx_free(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	struct hwrm_stat_ctx_free_input req = {.req_type = 0 };
 	struct hwrm_stat_ctx_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_FREE, BNXT_USE_CHIMP_MB);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cpr->hw_stats_ctx_id);
 
@@ -1648,7 +1648,7 @@ int bnxt_hwrm_vnic_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 
 skip_ring_grps:
 	vnic->mru = BNXT_VNIC_MRU(bp->eth_dev->data->mtu);
-	HWRM_PREP(req, VNIC_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_ALLOC, BNXT_USE_CHIMP_MB);
 
 	if (vnic->func_default)
 		req.flags =
@@ -1671,7 +1671,7 @@ static int bnxt_hwrm_vnic_plcmodes_qcfg(struct bnxt *bp,
 	struct hwrm_vnic_plcmodes_qcfg_input req = {.req_type = 0 };
 	struct hwrm_vnic_plcmodes_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_PLCMODES_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
@@ -1704,7 +1704,7 @@ static int bnxt_hwrm_vnic_plcmodes_cfg(struct bnxt *bp,
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 	req.flags = rte_cpu_to_le_32(pmode->flags);
@@ -1743,7 +1743,7 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	if (rc)
 		return rc;
 
-	HWRM_PREP(req, VNIC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (BNXT_CHIP_THOR(bp)) {
 		int dflt_rxq = vnic->start_grp_id;
@@ -1847,7 +1847,7 @@ int bnxt_hwrm_vnic_qcfg(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 		PMD_DRV_LOG(DEBUG, "VNIC QCFG ID %d\n", vnic->fw_vnic_id);
 		return rc;
 	}
-	HWRM_PREP(req, VNIC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.enables =
 		rte_cpu_to_le_32(HWRM_VNIC_QCFG_INPUT_ENABLES_VF_ID_VALID);
@@ -1890,7 +1890,7 @@ int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp,
 	struct hwrm_vnic_rss_cos_lb_ctx_alloc_output *resp =
 						bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_RSS_COS_LB_CTX_ALLOC, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -1919,7 +1919,7 @@ int _bnxt_hwrm_vnic_ctx_free(struct bnxt *bp,
 		PMD_DRV_LOG(DEBUG, "VNIC RSS Rule %x\n", vnic->rss_rule);
 		return rc;
 	}
-	HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_RSS_COS_LB_CTX_FREE, BNXT_USE_CHIMP_MB);
 
 	req.rss_cos_lb_ctx_id = rte_cpu_to_le_16(ctx_idx);
 
@@ -1964,7 +1964,7 @@ int bnxt_hwrm_vnic_free(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_FREE, BNXT_USE_CHIMP_MB);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
@@ -1991,7 +1991,7 @@ bnxt_hwrm_vnic_rss_cfg_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	struct hwrm_vnic_rss_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 
 	for (i = 0; i < nr_ctxs; i++) {
-		HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+		HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
 
 		req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 		req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
@@ -2029,7 +2029,7 @@ int bnxt_hwrm_vnic_rss_cfg(struct bnxt *bp,
 	if (BNXT_CHIP_THOR(bp))
 		return bnxt_hwrm_vnic_rss_cfg_thor(bp, vnic);
 
-	HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
 
 	req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
 	req.hash_mode_flags = vnic->hash_mode;
@@ -2062,7 +2062,7 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp,
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(
 			HWRM_VNIC_PLCMODES_CFG_INPUT_FLAGS_JUMBO_PLACEMENT);
@@ -2103,7 +2103,7 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp,
 		return 0;
 	}
 
-	HWRM_PREP(req, VNIC_TPA_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_TPA_CFG, BNXT_USE_CHIMP_MB);
 
 	if (enable) {
 		req.enables = rte_cpu_to_le_32(
@@ -2143,7 +2143,7 @@ int bnxt_hwrm_func_vf_mac(struct bnxt *bp, uint16_t vf, const uint8_t *mac_addr)
 	memcpy(req.dflt_mac_addr, mac_addr, sizeof(req.dflt_mac_addr));
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -2161,7 +2161,7 @@ int bnxt_hwrm_func_qstats_tx_drop(struct bnxt *bp, uint16_t fid,
 	struct hwrm_func_qstats_input req = {.req_type = 0};
 	struct hwrm_func_qstats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_QSTATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QSTATS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
@@ -2184,7 +2184,7 @@ int bnxt_hwrm_func_qstats(struct bnxt *bp, uint16_t fid,
 	struct hwrm_func_qstats_input req = {.req_type = 0};
 	struct hwrm_func_qstats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_QSTATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QSTATS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
@@ -2221,7 +2221,7 @@ int bnxt_hwrm_func_clr_stats(struct bnxt *bp, uint16_t fid)
 	struct hwrm_func_clr_stats_input req = {.req_type = 0};
 	struct hwrm_func_clr_stats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_CLR_STATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CLR_STATS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
@@ -2928,7 +2928,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	uint16_t flags;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3037,7 +3037,7 @@ static int bnxt_hwrm_pf_func_cfg(struct bnxt *bp, int tx_rings)
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.enables = rte_cpu_to_le_32(enables);
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3109,7 +3109,7 @@ static int reserve_resources_from_vf(struct bnxt *bp,
 	int rc;
 
 	/* Get the actual allocated values now */
-	HWRM_PREP(req, FUNC_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCAPS, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3147,7 +3147,7 @@ int bnxt_hwrm_func_qcfg_current_vf_vlan(struct bnxt *bp, int vf)
 	int rc;
 
 	/* Check for zero MAC address */
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -3165,7 +3165,7 @@ static int update_pf_resource_max(struct bnxt *bp)
 	int rc;
 
 	/* And copy the allocated numbers into the pf struct */
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -3268,7 +3268,7 @@ int bnxt_hwrm_allocate_vfs(struct bnxt *bp, int num_vfs)
 	for (i = 0; i < num_vfs; i++) {
 		add_random_mac_if_needed(bp, &req, i);
 
-		HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+		HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 		req.flags = rte_cpu_to_le_32(bp->pf.vf_info[i].func_cfg_flags);
 		req.fid = rte_cpu_to_le_16(bp->pf.vf_info[i].fid);
 		rc = bnxt_hwrm_send_message(bp,
@@ -3324,7 +3324,7 @@ int bnxt_hwrm_pf_evb_mode(struct bnxt *bp)
 	struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.enables = rte_cpu_to_le_32(HWRM_FUNC_CFG_INPUT_ENABLES_EVB_MODE);
@@ -3344,7 +3344,7 @@ int bnxt_hwrm_tunnel_dst_port_alloc(struct bnxt *bp, uint16_t port,
 	struct hwrm_tunnel_dst_port_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, TUNNEL_DST_PORT_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_TUNNEL_DST_PORT_ALLOC, BNXT_USE_CHIMP_MB);
 	req.tunnel_type = tunnel_type;
 	req.tunnel_dst_port_val = port;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3375,7 +3375,7 @@ int bnxt_hwrm_tunnel_dst_port_free(struct bnxt *bp, uint16_t port,
 	struct hwrm_tunnel_dst_port_free_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, TUNNEL_DST_PORT_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_TUNNEL_DST_PORT_FREE, BNXT_USE_CHIMP_MB);
 
 	req.tunnel_type = tunnel_type;
 	req.tunnel_dst_port_id = rte_cpu_to_be_16(port);
@@ -3394,7 +3394,7 @@ int bnxt_hwrm_func_cfg_vf_set_flags(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.flags = rte_cpu_to_le_32(flags);
@@ -3424,7 +3424,7 @@ int bnxt_hwrm_func_buf_rgtr(struct bnxt *bp)
 	struct hwrm_func_buf_rgtr_input req = {.req_type = 0 };
 	struct hwrm_func_buf_rgtr_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_BUF_RGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BUF_RGTR, BNXT_USE_CHIMP_MB);
 
 	req.req_buf_num_pages = rte_cpu_to_le_16(1);
 	req.req_buf_page_size = rte_cpu_to_le_16(
@@ -3455,7 +3455,7 @@ int bnxt_hwrm_func_buf_unrgtr(struct bnxt *bp)
 	if (!(BNXT_PF(bp) && bp->pdev->max_vfs))
 		return 0;
 
-	HWRM_PREP(req, FUNC_BUF_UNRGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BUF_UNRGTR, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3471,7 +3471,7 @@ int bnxt_hwrm_func_cfg_def_cp(struct bnxt *bp)
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.flags = rte_cpu_to_le_32(bp->pf.func_cfg_flags);
@@ -3493,7 +3493,7 @@ int bnxt_hwrm_vf_func_cfg_def_cp(struct bnxt *bp)
 	struct hwrm_func_vf_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
 
 	req.enables = rte_cpu_to_le_32(
 			HWRM_FUNC_VF_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
@@ -3515,7 +3515,7 @@ int bnxt_hwrm_set_default_vlan(struct bnxt *bp, int vf, uint8_t is_vf)
 	uint32_t func_cfg_flags;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (is_vf) {
 		dflt_vlan = bp->pf.vf_info[vf].dflt_vlan;
@@ -3547,7 +3547,7 @@ int bnxt_hwrm_func_bw_cfg(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.enables |= rte_cpu_to_le_32(enables);
@@ -3567,7 +3567,7 @@ int bnxt_hwrm_set_vf_vlan(struct bnxt *bp, int vf)
 	struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(bp->pf.vf_info[vf].func_cfg_flags);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
@@ -3604,7 +3604,7 @@ int bnxt_hwrm_reject_fwd_resp(struct bnxt *bp, uint16_t target_id,
 	if (ec_size > sizeof(req.encap_request))
 		return -1;
 
-	HWRM_PREP(req, REJECT_FWD_RESP, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_REJECT_FWD_RESP, BNXT_USE_CHIMP_MB);
 
 	req.encap_resp_target_id = rte_cpu_to_le_16(target_id);
 	memcpy(req.encap_request, encaped, ec_size);
@@ -3624,7 +3624,7 @@ int bnxt_hwrm_func_qcfg_vf_default_mac(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc;
 
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3648,7 +3648,7 @@ int bnxt_hwrm_exec_fwd_resp(struct bnxt *bp, uint16_t target_id,
 	if (ec_size > sizeof(req.encap_request))
 		return -1;
 
-	HWRM_PREP(req, EXEC_FWD_RESP, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_EXEC_FWD_RESP, BNXT_USE_CHIMP_MB);
 
 	req.encap_resp_target_id = rte_cpu_to_le_16(target_id);
 	memcpy(req.encap_request, encaped, ec_size);
@@ -3668,7 +3668,7 @@ int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
 	struct hwrm_stat_ctx_query_input req = {.req_type = 0};
 	struct hwrm_stat_ctx_query_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_QUERY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_QUERY, BNXT_USE_CHIMP_MB);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cid);
 
@@ -3706,7 +3706,7 @@ int bnxt_hwrm_port_qstats(struct bnxt *bp)
 	struct bnxt_pf_info *pf = &bp->pf;
 	int rc;
 
-	HWRM_PREP(req, PORT_QSTATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_QSTATS, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	req.tx_stat_host_addr = rte_cpu_to_le_64(bp->hw_tx_port_stats_map);
@@ -3731,7 +3731,7 @@ int bnxt_hwrm_port_clr_stats(struct bnxt *bp)
 	    BNXT_NPAR(bp) || BNXT_MH(bp) || BNXT_TOTAL_VFS(bp))
 		return 0;
 
-	HWRM_PREP(req, PORT_CLR_STATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_CLR_STATS, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3751,7 +3751,7 @@ int bnxt_hwrm_port_led_qcaps(struct bnxt *bp)
 	if (BNXT_VF(bp))
 		return 0;
 
-	HWRM_PREP(req, PORT_LED_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_LED_QCAPS, BNXT_USE_CHIMP_MB);
 	req.port_id = bp->pf.port_id;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3793,7 +3793,7 @@ int bnxt_hwrm_port_led_cfg(struct bnxt *bp, bool led_on)
 	if (!bp->num_leds || BNXT_VF(bp))
 		return -EOPNOTSUPP;
 
-	HWRM_PREP(req, PORT_LED_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_LED_CFG, BNXT_USE_CHIMP_MB);
 
 	if (led_on) {
 		led_state = HWRM_PORT_LED_CFG_INPUT_LED0_STATE_BLINKALT;
@@ -3826,7 +3826,7 @@ int bnxt_hwrm_nvm_get_dir_info(struct bnxt *bp, uint32_t *entries,
 	struct hwrm_nvm_get_dir_info_input req = {0};
 	struct hwrm_nvm_get_dir_info_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, NVM_GET_DIR_INFO, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_GET_DIR_INFO, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3869,7 +3869,7 @@ int bnxt_get_nvram_directory(struct bnxt *bp, uint32_t len, uint8_t *data)
 			"unable to map response address to physical memory\n");
 		return -ENOMEM;
 	}
-	HWRM_PREP(req, NVM_GET_DIR_ENTRIES, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_GET_DIR_ENTRIES, BNXT_USE_CHIMP_MB);
 	req.host_dest_addr = rte_cpu_to_le_64(dma_handle);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3903,7 +3903,7 @@ int bnxt_hwrm_get_nvram_item(struct bnxt *bp, uint32_t index,
 			"unable to map response address to physical memory\n");
 		return -ENOMEM;
 	}
-	HWRM_PREP(req, NVM_READ, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_READ, BNXT_USE_CHIMP_MB);
 	req.host_dest_addr = rte_cpu_to_le_64(dma_handle);
 	req.dir_idx = rte_cpu_to_le_16(index);
 	req.offset = rte_cpu_to_le_32(offset);
@@ -3925,7 +3925,7 @@ int bnxt_hwrm_erase_nvram_directory(struct bnxt *bp, uint8_t index)
 	struct hwrm_nvm_erase_dir_entry_input req = {0};
 	struct hwrm_nvm_erase_dir_entry_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, NVM_ERASE_DIR_ENTRY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_ERASE_DIR_ENTRY, BNXT_USE_CHIMP_MB);
 	req.dir_idx = rte_cpu_to_le_16(index);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -3958,7 +3958,7 @@ int bnxt_hwrm_flash_nvram(struct bnxt *bp, uint16_t dir_type,
 	}
 	memcpy(buf, data, data_len);
 
-	HWRM_PREP(req, NVM_WRITE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_WRITE, BNXT_USE_CHIMP_MB);
 
 	req.dir_type = rte_cpu_to_le_16(dir_type);
 	req.dir_ordinal = rte_cpu_to_le_16(dir_ordinal);
@@ -4009,7 +4009,7 @@ static int bnxt_hwrm_func_vf_vnic_query(struct bnxt *bp, uint16_t vf,
 	int rc;
 
 	/* First query all VNIC ids */
-	HWRM_PREP(req, FUNC_VF_VNIC_IDS_QUERY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_VNIC_IDS_QUERY, BNXT_USE_CHIMP_MB);
 
 	req.vf_id = rte_cpu_to_le_16(bp->pf.first_vf_id + vf);
 	req.max_vnic_id_cnt = rte_cpu_to_le_32(bp->pf.total_vnics);
@@ -4091,7 +4091,7 @@ int bnxt_hwrm_func_cfg_vf_set_vlan_anti_spoof(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.enables |= rte_cpu_to_le_32(
@@ -4166,7 +4166,7 @@ int bnxt_hwrm_set_em_filter(struct bnxt *bp,
 	if (filter->fw_em_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_em_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_EM_FLOW_ALLOC, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_CFA_EM_FLOW_ALLOC, BNXT_USE_KONG(bp));
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -4238,7 +4238,7 @@ int bnxt_hwrm_clear_em_filter(struct bnxt *bp, struct bnxt_filter_info *filter)
 	if (filter->fw_em_filter_id == UINT64_MAX)
 		return 0;
 
-	HWRM_PREP(req, CFA_EM_FLOW_FREE, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_CFA_EM_FLOW_FREE, BNXT_USE_KONG(bp));
 
 	req.em_filter_id = rte_cpu_to_le_64(filter->fw_em_filter_id);
 
@@ -4266,7 +4266,7 @@ int bnxt_hwrm_set_ntuple_filter(struct bnxt *bp,
 	if (filter->fw_ntuple_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_ntuple_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_NTUPLE_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_NTUPLE_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -4346,7 +4346,7 @@ int bnxt_hwrm_clear_ntuple_filter(struct bnxt *bp,
 	if (filter->fw_ntuple_filter_id == UINT64_MAX)
 		return 0;
 
-	HWRM_PREP(req, CFA_NTUPLE_FILTER_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_NTUPLE_FILTER_FREE, BNXT_USE_CHIMP_MB);
 
 	req.ntuple_filter_id = rte_cpu_to_le_64(filter->fw_ntuple_filter_id);
 
@@ -4377,7 +4377,7 @@ bnxt_vnic_rss_configure_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 		struct bnxt_rx_ring_info *rxr;
 		struct bnxt_cp_ring_info *cpr;
 
-		HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+		HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
 
 		req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 		req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
@@ -4509,7 +4509,7 @@ static int bnxt_hwrm_set_coal_params_thor(struct bnxt *bp,
 	uint16_t flags;
 	int rc;
 
-	HWRM_PREP(req, RING_AGGINT_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_AGGINT_QCAPS, BNXT_USE_CHIMP_MB);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
 
@@ -4546,7 +4546,7 @@ int bnxt_hwrm_set_ring_coal(struct bnxt *bp,
 		return 0;
 	}
 
-	HWRM_PREP(req, RING_CMPL_RING_CFG_AGGINT_PARAMS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS, BNXT_USE_CHIMP_MB);
 	req.ring_id = rte_cpu_to_le_16(ring_id);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -4571,7 +4571,7 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 	    bp->ctx)
 		return 0;
 
-	HWRM_PREP(req, FUNC_BACKING_STORE_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS, BNXT_USE_CHIMP_MB);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT_SILENT();
 
@@ -4650,7 +4650,7 @@ int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, uint32_t enables)
 	if (!ctx)
 		return 0;
 
-	HWRM_PREP(req, FUNC_BACKING_STORE_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_CFG, BNXT_USE_CHIMP_MB);
 	req.enables = rte_cpu_to_le_32(enables);
 
 	if (enables & HWRM_FUNC_BACKING_STORE_CFG_INPUT_ENABLES_QP) {
@@ -4743,7 +4743,7 @@ int bnxt_hwrm_ext_port_qstats(struct bnxt *bp)
 	      bp->flags & BNXT_FLAG_EXT_TX_PORT_STATS))
 		return 0;
 
-	HWRM_PREP(req, PORT_QSTATS_EXT, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_QSTATS_EXT, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	if (bp->flags & BNXT_FLAG_EXT_TX_PORT_STATS) {
@@ -4784,7 +4784,7 @@ bnxt_hwrm_tunnel_redirect(struct bnxt *bp, uint8_t type)
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_TUNNEL_TYPE_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_TUNNEL_TYPE_ALLOC, BNXT_USE_CHIMP_MB);
 	req.tunnel_type = type;
 	req.dest_fid = bp->fw_fid;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -4803,7 +4803,7 @@ bnxt_hwrm_tunnel_redirect_free(struct bnxt *bp, uint8_t type)
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_TUNNEL_TYPE_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_TUNNEL_TYPE_FREE, BNXT_USE_CHIMP_MB);
 	req.tunnel_type = type;
 	req.dest_fid = bp->fw_fid;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -4821,7 +4821,7 @@ int bnxt_hwrm_tunnel_redirect_query(struct bnxt *bp, uint32_t *type)
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_QUERY_TUNNEL_TYPE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_QUERY_TUNNEL_TYPE, BNXT_USE_CHIMP_MB);
 	req.src_fid = bp->fw_fid;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -4842,7 +4842,7 @@ int bnxt_hwrm_tunnel_redirect_info(struct bnxt *bp, uint8_t tun_type,
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_TUNNEL_TYPE_INFO, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_TUNNEL_TYPE_INFO, BNXT_USE_CHIMP_MB);
 	req.src_fid = bp->fw_fid;
 	req.tunnel_type = tun_type;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -4867,7 +4867,7 @@ int bnxt_hwrm_set_mac(struct bnxt *bp)
 	if (!BNXT_VF(bp))
 		return 0;
 
-	HWRM_PREP(req, FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
 
 	req.enables =
 		rte_cpu_to_le_32(HWRM_FUNC_VF_CFG_INPUT_ENABLES_DFLT_MAC_ADDR);
@@ -4900,7 +4900,7 @@ int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
 	if (!up && (bp->flags & BNXT_FLAG_FW_RESET))
 		return 0;
 
-	HWRM_PREP(req, FUNC_DRV_IF_CHANGE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_DRV_IF_CHANGE, BNXT_USE_CHIMP_MB);
 
 	if (up)
 		req.flags =
@@ -4946,7 +4946,7 @@ int bnxt_hwrm_error_recovery_qcfg(struct bnxt *bp)
 		memset(info, 0, sizeof(*info));
 	}
 
-	HWRM_PREP(req, ERROR_RECOVERY_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_ERROR_RECOVERY_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -5022,7 +5022,7 @@ int bnxt_hwrm_fw_reset(struct bnxt *bp)
 	if (!BNXT_PF(bp))
 		return -EOPNOTSUPP;
 
-	HWRM_PREP(req, FW_RESET, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_FW_RESET, BNXT_USE_KONG(bp));
 
 	req.embedded_proc_type =
 		HWRM_FW_RESET_INPUT_EMBEDDED_PROC_TYPE_CHIP;
@@ -5050,7 +5050,7 @@ int bnxt_hwrm_port_ts_query(struct bnxt *bp, uint8_t path, uint64_t *timestamp)
 	if (!ptp)
 		return 0;
 
-	HWRM_PREP(req, PORT_TS_QUERY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_TS_QUERY, BNXT_USE_CHIMP_MB);
 
 	switch (path) {
 	case BNXT_PTP_FLAGS_PATH_TX:
@@ -5098,7 +5098,7 @@ int bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(struct bnxt *bp)
 		return 0;
 	}
 
-	HWRM_PREP(req, CFA_ADV_FLOW_MGNT_QCAPS, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_CFA_ADV_FLOW_MGNT_QCAPS, BNXT_USE_KONG(bp));
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_KONG(bp));
 
 	HWRM_CHECK_RESULT();
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 03/34] net/bnxt: add truflow message handlers
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 01/34] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 02/34] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 04/34] net/bnxt: add initial tf core session open Venkat Duvvuru
                     ` (32 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Pete Spreadborough, Randy Schacher

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- Add bnxt message functions for truflow APIs

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
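A hedged sketch of a caller using the direct (non-tunneled) handler;
the request/response structure names and the message type are
illustrative stand-ins for whichever HWRM_TF_* command a truflow
component sends:

	#include <string.h>
	#include "bnxt.h"
	#include "bnxt_hwrm.h"

	/* Hypothetical caller: 'req' and 'resp' stand in for any HWRM
	 * input/output pair; HWRM_TF_SESSION_OPEN is used as an
	 * example message type.
	 */
	static int example_tf_send(struct bnxt *bp)
	{
		struct hwrm_tf_session_open_input req;	  /* assumed type */
		struct hwrm_tf_session_open_output resp; /* assumed type */

		memset(&req, 0, sizeof(req));
		memset(&resp, 0, sizeof(resp));

		return bnxt_hwrm_tf_message_direct(bp,
						   false, /* CHIMP mailbox */
						   HWRM_TF_SESSION_OPEN,
						   &req, sizeof(req),
						   &resp, sizeof(resp));
	}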
---
 drivers/net/bnxt/bnxt_hwrm.c | 83 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h | 18 ++++++++++
 2 files changed, 101 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 79e4156..c8309ee 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -257,6 +257,89 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
 
 #define HWRM_UNLOCK()		rte_spinlock_unlock(&bp->hwrm_lock)
 
+int bnxt_hwrm_tf_message_direct(struct bnxt *bp,
+				bool use_kong_mb,
+				uint16_t msg_type,
+				void *msg,
+				uint32_t msg_len,
+				void *resp_msg,
+				uint32_t resp_len)
+{
+	int rc = 0;
+	bool mailbox = BNXT_USE_CHIMP_MB;
+	struct input *req = msg;
+	struct output *resp = bp->hwrm_cmd_resp_addr;
+
+	if (use_kong_mb)
+		mailbox = BNXT_USE_KONG(bp);
+
+	HWRM_PREP(req, msg_type, mailbox);
+
+	rc = bnxt_hwrm_send_message(bp, req, msg_len, mailbox);
+
+	HWRM_CHECK_RESULT();
+
+	if (resp_msg)
+		memcpy(resp_msg, resp, resp_len);
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
+int bnxt_hwrm_tf_message_tunneled(struct bnxt *bp,
+				  bool use_kong_mb,
+				  uint16_t tf_type,
+				  uint16_t tf_subtype,
+				  uint32_t *tf_response_code,
+				  void *msg,
+				  uint32_t msg_len,
+				  void *response,
+				  uint32_t response_len)
+{
+	int rc = 0;
+	struct hwrm_cfa_tflib_input req = { .req_type = 0 };
+	struct hwrm_cfa_tflib_output *resp = bp->hwrm_cmd_resp_addr;
+	bool mailbox = BNXT_USE_CHIMP_MB;
+
+	if (msg_len > sizeof(req.tf_req))
+		return -ENOMEM;
+
+	if (use_kong_mb)
+		mailbox = BNXT_USE_KONG(bp);
+
+	HWRM_PREP(&req, HWRM_TF, mailbox);
+	/* Build the request using the user-supplied request payload.
+	 * The TLV request size is checked at build time against the
+	 * HWRM maximum request size, so no runtime check is required.
+	 */
+	req.tf_type = tf_type;
+	req.tf_subtype = tf_subtype;
+	memcpy(req.tf_req, msg, msg_len);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), mailbox);
+	HWRM_CHECK_RESULT();
+
+	/* Copy the resp to the user-provided response buffer */
+	if (response != NULL)
+		/* Post-process the response data. Only the 'payload'
+		 * needs to be copied, as the HWRM data structure is
+		 * HWRM header + msg header + payload and TFLIB only
+		 * provides a payload placeholder.
+		 */
+		if (response_len != 0) {
+			memcpy(response,
+			       resp->tf_resp,
+			       response_len);
+		}
+
+	/* Extract the internal tflib response code */
+	*tf_response_code = resp->tf_resp_code;
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
 int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 {
 	int rc = 0;
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 5eb2ee8..df7aa74 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -69,6 +69,24 @@ HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED
 	bp->rx_cos_queue[x].profile =	\
 		resp->queue_id##x##_service_profile
 
+int bnxt_hwrm_tf_message_tunneled(struct bnxt *bp,
+				  bool use_kong_mb,
+				  uint16_t tf_type,
+				  uint16_t tf_subtype,
+				  uint32_t *tf_response_code,
+				  void *msg,
+				  uint32_t msg_len,
+				  void *response,
+				  uint32_t response_len);
+
+int bnxt_hwrm_tf_message_direct(struct bnxt *bp,
+				bool use_kong_mb,
+				uint16_t msg_type,
+				void *msg,
+				uint32_t msg_len,
+				void *resp_msg,
+				uint32_t resp_len);
+
 int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp,
 				   struct bnxt_vnic_info *vnic);
 int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 04/34] net/bnxt: add initial tf core session open
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (2 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 03/34] net/bnxt: add truflow message handlers Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 05/34] net/bnxt: add initial tf core session close support Venkat Duvvuru
                     ` (31 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add infrastructure support
- Add tf_core open session support

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
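One piece of the new infrastructure worth noting is the compile-time
size checking in hwrm_tf.h: every request/response structure is
asserted against the TLV payload limits (TF_MAX_REQ_SIZE and
TF_MAX_RESP_SIZE) with a negative-array-size typedef, so an oversized
message breaks the build instead of truncating at runtime. A
standalone illustration of the idiom, with a demo struct standing in
for the real ones:

	#include <stdint.h>

	/* If 'condition' is false the array size is -1, which is a
	 * compile error; if true, it is a harmless one-byte typedef.
	 */
	#define BUILD_BUG_ON(condition) \
		typedef char p__LINE__[(condition) ? 1 : -1]

	#define TF_MAX_REQ_SIZE 104	/* 26 x 32-bit TLV words */

	struct demo_req {		/* stand-in for a tf_*_input */
		uint32_t fw_session_id;
		char name[64];
	};

	/* Compiles: 68 bytes fit within the 104-byte TLV payload. */
	BUILD_BUG_ON(sizeof(struct demo_req) <= TF_MAX_REQ_SIZE);

	/* Would fail to compile if enabled, catching an oversized
	 * request:
	 * BUILD_BUG_ON(sizeof(struct demo_req) > TF_MAX_REQ_SIZE);
	 */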
---
 drivers/net/bnxt/Makefile                |   8 +
 drivers/net/bnxt/bnxt.h                  |   4 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       | 971 +++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c       | 145 +++++
 drivers/net/bnxt/tf_core/tf_core.h       | 347 +++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        |  79 +++
 drivers/net/bnxt/tf_core/tf_msg.h        |  44 ++
 drivers/net/bnxt/tf_core/tf_msg_common.h |  47 ++
 drivers/net/bnxt/tf_core/tf_project.h    |  24 +
 drivers/net/bnxt/tf_core/tf_resources.h  |  46 ++
 drivers/net/bnxt/tf_core/tf_rm.h         |  33 ++
 drivers/net/bnxt/tf_core/tf_session.h    |  85 +++
 drivers/net/bnxt/tf_core/tfp.c           | 163 ++++++
 drivers/net/bnxt/tf_core/tfp.h           | 188 ++++++
 14 files changed, 2184 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_core/hwrm_tf.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_project.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_resources.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.h
 create mode 100644 drivers/net/bnxt/tf_core/tfp.c
 create mode 100644 drivers/net/bnxt/tf_core/tfp.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index b77532b..8a68059 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -43,6 +43,14 @@ ifeq ($(CONFIG_RTE_ARCH_X86), y)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD), y)
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core
+endif
+
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
+
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index b795ed6..a8e57ca 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -21,6 +21,8 @@
 #include "bnxt_cpr.h"
 #include "bnxt_util.h"
 
+#include "tf_core.h"
+
 /* Vendor ID */
 #define PCI_VENDOR_ID_BROADCOM		0x14E4
 
@@ -679,6 +681,8 @@ struct bnxt {
 /* TCAM and EM should be 16-bit only. Other modes not supported. */
 #define BNXT_FLOW_ID_MASK	0x0000ffff
 	struct bnxt_mark_info	*mark_table;
+
+	struct tf               tfp;
 };
 
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
new file mode 100644
index 0000000..a8a5547
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -0,0 +1,971 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+#ifndef _HWRM_TF_H_
+#define _HWRM_TF_H_
+
+#include "tf_core.h"
+
+typedef enum tf_type {
+	TF_TYPE_TRUFLOW,
+	TF_TYPE_LAST = TF_TYPE_TRUFLOW,
+} tf_type_t;
+
+typedef enum tf_subtype {
+	HWRM_TFT_SESSION_ATTACH = 712,
+	HWRM_TFT_SESSION_HW_RESC_QCAPS = 721,
+	HWRM_TFT_SESSION_HW_RESC_ALLOC = 722,
+	HWRM_TFT_SESSION_HW_RESC_FREE = 723,
+	HWRM_TFT_SESSION_HW_RESC_FLUSH = 724,
+	HWRM_TFT_SESSION_SRAM_RESC_QCAPS = 725,
+	HWRM_TFT_SESSION_SRAM_RESC_ALLOC = 726,
+	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
+	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
+	HWRM_TFT_TBL_SCOPE_CFG = 731,
+	HWRM_TFT_EM_RULE_INSERT = 739,
+	HWRM_TFT_EM_RULE_DELETE = 740,
+	HWRM_TFT_REG_GET = 821,
+	HWRM_TFT_REG_SET = 822,
+	HWRM_TFT_TBL_TYPE_SET = 823,
+	HWRM_TFT_TBL_TYPE_GET = 824,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET,
+} tf_subtype_t;
+
+/* Request and Response compile time checking */
+/* u32_t	tlv_req_value[26]; */
+#define TF_MAX_REQ_SIZE 104
+/* u32_t	tlv_resp_value[170]; */
+#define TF_MAX_RESP_SIZE 680
+#define BUILD_BUG_ON(condition) typedef char p__LINE__[(condition) ? 1 : -1]
+
+/* Use these when allocating/freeing any kind of
+ * indices over HWRM and filling the parms pointer
+ */
+#define TF_BULK_RECV	 128
+#define TF_BULK_SEND	  16
+
+/* EM Key value */
+#define TF_DEV_DATA_TYPE_TF_EM_RULE_INSERT_KEY_DATA 0x2e30UL
+/* EM Key value */
+#define TF_DEV_DATA_TYPE_TF_EM_RULE_DELETE_KEY_DATA 0x2e40UL
+/* L2 Context DMA Address Type */
+#define TF_DEV_DATA_TYPE_TF_L2_CTX_DMA_ADDR		0x2fe0UL
+/* L2 Context Entry */
+#define TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY		0x2fe1UL
+/* Prof tcam DMA Address Type */
+#define TF_DEV_DATA_TYPE_TF_PROF_TCAM_DMA_ADDR		0x3030UL
+/* Prof tcam Entry */
+#define TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY		0x3031UL
+/* WC DMA Address Type */
+#define TF_DEV_DATA_TYPE_TF_WC_DMA_ADDR			0x30d0UL
+/* WC Entry */
+#define TF_DEV_DATA_TYPE_TF_WC_ENTRY			0x30d1UL
+/* Action Data */
+#define TF_DEV_DATA_TYPE_TF_ACTION_DATA			0x3170UL
+#define TF_DEV_DATA_TYPE_LAST   TF_DEV_DATA_TYPE_TF_ACTION_DATA
+
+#define TF_BITS2BYTES(x) (((x) + 7) >> 3)
+#define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
+
+struct tf_session_attach_input;
+struct tf_session_hw_resc_qcaps_input;
+struct tf_session_hw_resc_qcaps_output;
+struct tf_session_hw_resc_alloc_input;
+struct tf_session_hw_resc_alloc_output;
+struct tf_session_hw_resc_free_input;
+struct tf_session_hw_resc_flush_input;
+struct tf_session_sram_resc_qcaps_input;
+struct tf_session_sram_resc_qcaps_output;
+struct tf_session_sram_resc_alloc_input;
+struct tf_session_sram_resc_alloc_output;
+struct tf_session_sram_resc_free_input;
+struct tf_session_sram_resc_flush_input;
+struct tf_tbl_type_set_input;
+struct tf_tbl_type_get_input;
+struct tf_tbl_type_get_output;
+struct tf_em_internal_insert_input;
+struct tf_em_internal_insert_output;
+struct tf_em_internal_delete_input;
+/* Input params for session attach */
+typedef struct tf_session_attach_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* Session Name */
+	char				 session_name[TF_SESSION_NAME_MAX];
+} tf_session_attach_input_t, *ptf_session_attach_input_t;
+BUILD_BUG_ON(sizeof(tf_session_attach_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource HW qcaps */
+typedef struct tf_session_hw_resc_qcaps_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query applies to RX */
+#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the query applies to TX */
+#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
+} tf_session_hw_resc_qcaps_input_t, *ptf_session_hw_resc_qcaps_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_qcaps_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource HW qcaps */
+typedef struct tf_session_hw_resc_qcaps_output {
+	/* Control Flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates Static partitioning */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
+	/* When set to 1, indicates Strategy 1 */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
+	/* When set to 2, indicates Strategy 2 */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
+	/* When set to 3, indicates Strategy 3 */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
+	/* Unused */
+	uint8_t			  unused[4];
+	/* Minimum guaranteed number of L2 Ctx */
+	uint16_t			 l2_ctx_tcam_entries_min;
+	/* Maximum non-guaranteed number of L2 Ctx */
+	uint16_t			 l2_ctx_tcam_entries_max;
+	/* Minimum guaranteed number of profile functions */
+	uint16_t			 prof_func_min;
+	/* Maximum non-guaranteed number of profile functions */
+	uint16_t			 prof_func_max;
+	/* Minimum guaranteed number of profile TCAM entries */
+	uint16_t			 prof_tcam_entries_min;
+	/* Maximum non-guaranteed number of profile TCAM entries */
+	uint16_t			 prof_tcam_entries_max;
+	/* Minimum guaranteed number of EM profile ID */
+	uint16_t			 em_prof_id_min;
+	/* Maximum non-guaranteed number of EM profile ID */
+	uint16_t			 em_prof_id_max;
+	/* Minimum guaranteed number of EM records entries */
+	uint16_t			 em_record_entries_min;
+	/* Maximum non-guaranteed number of EM record entries */
+	uint16_t			 em_record_entries_max;
+	/* Minimum guaranteed number of WC TCAM profile ID */
+	uint16_t			 wc_tcam_prof_id_min;
+	/* Maximum non-guaranteed number of WC TCAM profile ID */
+	uint16_t			 wc_tcam_prof_id_max;
+	/* Minimum guaranteed number of WC TCAM entries */
+	uint16_t			 wc_tcam_entries_min;
+	/* Maximum non-guaranteed number of WC TCAM entries */
+	uint16_t			 wc_tcam_entries_max;
+	/* Minimum guaranteed number of meter profiles */
+	uint16_t			 meter_profiles_min;
+	/* Maximum non-guaranteed number of meter profiles */
+	uint16_t			 meter_profiles_max;
+	/* Minimum guaranteed number of meter instances */
+	uint16_t			 meter_inst_min;
+	/* Maximum non-guaranteed number of meter instances */
+	uint16_t			 meter_inst_max;
+	/* Minimum guaranteed number of mirrors */
+	uint16_t			 mirrors_min;
+	/* Maximum non-guaranteed number of mirrors */
+	uint16_t			 mirrors_max;
+	/* Minimum guaranteed number of UPAR */
+	uint16_t			 upar_min;
+	/* Maximum non-guaranteed number of UPAR */
+	uint16_t			 upar_max;
+	/* Minimum guaranteed number of SP TCAM entries */
+	uint16_t			 sp_tcam_entries_min;
+	/* Maximum non-guaranteed number of SP TCAM entries */
+	uint16_t			 sp_tcam_entries_max;
+	/* Minimum guaranteed number of L2 Functions */
+	uint16_t			 l2_func_min;
+	/* Maximum non-guaranteed number of L2 Functions */
+	uint16_t			 l2_func_max;
+	/* Minimum guaranteed number of flexible key templates */
+	uint16_t			 flex_key_templ_min;
+	/* Maximum non-guaranteed number of flexible key templates */
+	uint16_t			 flex_key_templ_max;
+	/* Minimum guaranteed number of table scopes */
+	uint16_t			 tbl_scope_min;
+	/* Maximum non-guaranteed number of table scopes */
+	uint16_t			 tbl_scope_max;
+	/* Minimum guaranteed number of epoch0 entries */
+	uint16_t			 epoch0_entries_min;
+	/* Maximum non-guaranteed number of epoch0 entries */
+	uint16_t			 epoch0_entries_max;
+	/* Minimum guaranteed number of epoch1 entries */
+	uint16_t			 epoch1_entries_min;
+	/* Maximum non-guaranteed number of epoch1 entries */
+	uint16_t			 epoch1_entries_max;
+	/* Minimum guaranteed number of metadata */
+	uint16_t			 metadata_min;
+	/* Maximum non-guaranteed number of metadata */
+	uint16_t			 metadata_max;
+	/* Minimum guaranteed number of CT states */
+	uint16_t			 ct_state_min;
+	/* Maximum non-guaranteed number of CT states */
+	uint16_t			 ct_state_max;
+	/* Minimum guaranteed number of range profiles */
+	uint16_t			 range_prof_min;
+	/* Maximum non-guaranteed number of range profiles */
+	uint16_t			 range_prof_max;
+	/* Minimum guaranteed number of range entries */
+	uint16_t			 range_entries_min;
+	/* Maximum non-guaranteed number of range entries */
+	uint16_t			 range_entries_max;
+	/* Minimum guaranteed number of LAG table entries */
+	uint16_t			 lag_tbl_entries_min;
+	/* Maximum non-guaranteed number of LAG table entries */
+	uint16_t			 lag_tbl_entries_max;
+} tf_session_hw_resc_qcaps_output_t, *ptf_session_hw_resc_qcaps_output_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_qcaps_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource HW alloc */
+typedef struct tf_session_hw_resc_alloc_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the allocation applies to RX */
+#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the allocation applies to TX */
+#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Number of L2 CTX TCAM entries to be allocated */
+	uint16_t			 num_l2_ctx_tcam_entries;
+	/* Number of profile functions to be allocated */
+	uint16_t			 num_prof_func_entries;
+	/* Number of profile TCAM entries to be allocated */
+	uint16_t			 num_prof_tcam_entries;
+	/* Number of EM profile ids to be allocated */
+	uint16_t			 num_em_prof_id;
+	/* Number of EM record entries to be allocated */
+	uint16_t			 num_em_record_entries;
+	/* Number of WC TCAM profile ids to be allocated */
+	uint16_t			 num_wc_tcam_prof_id;
+	/* Number of WC TCAM entries to be allocated */
+	uint16_t			 num_wc_tcam_entries;
+	/* Number of meter profiles to be allocated */
+	uint16_t			 num_meter_profiles;
+	/* Number of meter instances to be allocated */
+	uint16_t			 num_meter_inst;
+	/* Number of mirrors to be allocated */
+	uint16_t			 num_mirrors;
+	/* Number of UPAR to be allocated */
+	uint16_t			 num_upar;
+	/* Number of SP TCAM entries to be allocated */
+	uint16_t			 num_sp_tcam_entries;
+	/* Number of L2 functions to be allocated */
+	uint16_t			 num_l2_func;
+	/* Number of flexible key templates to be allocated */
+	uint16_t			 num_flex_key_templ;
+	/* Number of table scopes to be allocated */
+	uint16_t			 num_tbl_scope;
+	/* Number of epoch0 entries to be allocated */
+	uint16_t			 num_epoch0_entries;
+	/* Number of epoch1 entries to be allocated */
+	uint16_t			 num_epoch1_entries;
+	/* Number of metadata to be allocated */
+	uint16_t			 num_metadata;
+	/* Number of CT states to be allocated */
+	uint16_t			 num_ct_state;
+	/* Number of range profiles to be allocated */
+	uint16_t			 num_range_prof;
+	/* Number of range Entries to be allocated */
+	uint16_t			 num_range_entries;
+	/* Number of LAG table entries to be allocated */
+	uint16_t			 num_lag_tbl_entries;
+} tf_session_hw_resc_alloc_input_t, *ptf_session_hw_resc_alloc_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_alloc_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource HW alloc */
+typedef struct tf_session_hw_resc_alloc_output {
+	/* Starting index of L2 CTX TCAM entries allocated to the session */
+	uint16_t			 l2_ctx_tcam_entries_start;
+	/* Number of L2 CTX TCAM entries allocated */
+	uint16_t			 l2_ctx_tcam_entries_stride;
+	/* Starting index of profile functions allocated to the session */
+	uint16_t			 prof_func_start;
+	/* Number of profile functions allocated */
+	uint16_t			 prof_func_stride;
+	/* Starting index of profile TCAM entries allocated to the session */
+	uint16_t			 prof_tcam_entries_start;
+	/* Number of profile TCAM entries allocated */
+	uint16_t			 prof_tcam_entries_stride;
+	/* Starting index of EM profile ids allocated to the session */
+	uint16_t			 em_prof_id_start;
+	/* Number of EM profile ids allocated */
+	uint16_t			 em_prof_id_stride;
+	/* Starting index of EM record entries allocated to the session */
+	uint16_t			 em_record_entries_start;
+	/* Number of EM record entries allocated */
+	uint16_t			 em_record_entries_stride;
+	/* Starting index of WC TCAM profile ids allocated to the session */
+	uint16_t			 wc_tcam_prof_id_start;
+	/* Number of WC TCAM profile ids allocated */
+	uint16_t			 wc_tcam_prof_id_stride;
+	/* Starting index of WC TCAM entries allocated to the session */
+	uint16_t			 wc_tcam_entries_start;
+	/* Number of WC TCAM entries allocated */
+	uint16_t			 wc_tcam_entries_stride;
+	/* Starting index of meter profiles allocated to the session */
+	uint16_t			 meter_profiles_start;
+	/* Number of meter profiles allocated */
+	uint16_t			 meter_profiles_stride;
+	/* Starting index of meter instances allocated to the session */
+	uint16_t			 meter_inst_start;
+	/* Number of meter instances allocated */
+	uint16_t			 meter_inst_stride;
+	/* Starting index of mirrors allocated to the session */
+	uint16_t			 mirrors_start;
+	/* Number of mirrors allocated */
+	uint16_t			 mirrors_stride;
+	/* Starting index of UPAR allocated to the session */
+	uint16_t			 upar_start;
+	/* Number of UPAR allocated */
+	uint16_t			 upar_stride;
+	/* Starting index of SP TCAM entries allocated to the session */
+	uint16_t			 sp_tcam_entries_start;
+	/* Number of SP TCAM entries allocated */
+	uint16_t			 sp_tcam_entries_stride;
+	/* Starting index of L2 functions allocated to the session */
+	uint16_t			 l2_func_start;
+	/* Number of L2 functions allocated */
+	uint16_t			 l2_func_stride;
+	/* Starting index of flexible key templates allocated to the session */
+	uint16_t			 flex_key_templ_start;
+	/* Number of flexible key templates allocated */
+	uint16_t			 flex_key_templ_stride;
+	/* Starting index of table scopes allocated to the session */
+	uint16_t			 tbl_scope_start;
+	/* Number of table scopes allocated */
+	uint16_t			 tbl_scope_stride;
+	/* Starting index of epoch0 entries allocated to the session */
+	uint16_t			 epoch0_entries_start;
+	/* Number of epoch0 entries allocated */
+	uint16_t			 epoch0_entries_stride;
+	/* Starting index of epoch1 entries allocated to the session */
+	uint16_t			 epoch1_entries_start;
+	/* Number of epoch1 entries allocated */
+	uint16_t			 epoch1_entries_stride;
+	/* Starting index of metadata allocated to the session */
+	uint16_t			 metadata_start;
+	/* Number of metadata allocated */
+	uint16_t			 metadata_stride;
+	/* Starting index of CT states allocated to the session */
+	uint16_t			 ct_state_start;
+	/* Number of CT states allocated */
+	uint16_t			 ct_state_stride;
+	/* Starting index of range profiles allocated to the session */
+	uint16_t			 range_prof_start;
+	/* Number of range profiles allocated */
+	uint16_t			 range_prof_stride;
+	/* Starting index of range entries allocated to the session */
+	uint16_t			 range_entries_start;
+	/* Number of range entries allocated */
+	uint16_t			 range_entries_stride;
+	/* Starting index of LAG table entries allocated to the session */
+	uint16_t			 lag_tbl_entries_start;
+	/* Number of LAG table entries allocated */
+	uint16_t			 lag_tbl_entries_stride;
+} tf_session_hw_resc_alloc_output_t, *ptf_session_hw_resc_alloc_output_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_alloc_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource HW free */
+typedef struct tf_session_hw_resc_free_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the free applies to RX */
+#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the free applies to TX */
+#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Starting index of L2 CTX TCAM entries allocated to the session */
+	uint16_t			 l2_ctx_tcam_entries_start;
+	/* Number of L2 CTX TCAM entries allocated */
+	uint16_t			 l2_ctx_tcam_entries_stride;
+	/* Starting index of profile functions allocated to the session */
+	uint16_t			 prof_func_start;
+	/* Number of profile functions allocated */
+	uint16_t			 prof_func_stride;
+	/* Starting index of profile TCAM entries allocated to the session */
+	uint16_t			 prof_tcam_entries_start;
+	/* Number of profile TCAM entries allocated */
+	uint16_t			 prof_tcam_entries_stride;
+	/* Starting index of EM profile ids allocated to the session */
+	uint16_t			 em_prof_id_start;
+	/* Number of EM profile ids allocated */
+	uint16_t			 em_prof_id_stride;
+	/* Starting index of EM record entries allocated to the session */
+	uint16_t			 em_record_entries_start;
+	/* Number of EM record entries allocated */
+	uint16_t			 em_record_entries_stride;
+	/* Starting index of WC TCAM profile ids allocated to the session */
+	uint16_t			 wc_tcam_prof_id_start;
+	/* Number of WC TCAM profile ids allocated */
+	uint16_t			 wc_tcam_prof_id_stride;
+	/* Starting index of WC TCAM entries allocated to the session */
+	uint16_t			 wc_tcam_entries_start;
+	/* Number of WC TCAM entries allocated */
+	uint16_t			 wc_tcam_entries_stride;
+	/* Starting index of meter profiles allocated to the session */
+	uint16_t			 meter_profiles_start;
+	/* Number of meter profiles allocated */
+	uint16_t			 meter_profiles_stride;
+	/* Starting index of meter instances allocated to the session */
+	uint16_t			 meter_inst_start;
+	/* Number of meter instances allocated */
+	uint16_t			 meter_inst_stride;
+	/* Starting index of mirrors allocated to the session */
+	uint16_t			 mirrors_start;
+	/* Number of mirrors allocated */
+	uint16_t			 mirrors_stride;
+	/* Starting index of UPAR allocated to the session */
+	uint16_t			 upar_start;
+	/* Number of UPAR allocated */
+	uint16_t			 upar_stride;
+	/* Starting index of SP TCAM entries allocated to the session */
+	uint16_t			 sp_tcam_entries_start;
+	/* Number of SP TCAM entries allocated */
+	uint16_t			 sp_tcam_entries_stride;
+	/* Starting index of L2 functions allocated to the session */
+	uint16_t			 l2_func_start;
+	/* Number of L2 functions allocated */
+	uint16_t			 l2_func_stride;
+	/* Starting index of flexible key templates allocated to the session */
+	uint16_t			 flex_key_templ_start;
+	/* Number of flexible key templates allocated */
+	uint16_t			 flex_key_templ_stride;
+	/* Starting index of table scopes allocated to the session */
+	uint16_t			 tbl_scope_start;
+	/* Number of table scopes allocated */
+	uint16_t			 tbl_scope_stride;
+	/* Starting index of epoch0 entries allocated to the session */
+	uint16_t			 epoch0_entries_start;
+	/* Number of epoch0 entries allocated */
+	uint16_t			 epoch0_entries_stride;
+	/* Starting index of epoch1 entries allocated to the session */
+	uint16_t			 epoch1_entries_start;
+	/* Number of epoch1 entries allocated */
+	uint16_t			 epoch1_entries_stride;
+	/* Starting index of metadata allocated to the session */
+	uint16_t			 metadata_start;
+	/* Number of metadata allocated */
+	uint16_t			 metadata_stride;
+	/* Starting index of CT states allocated to the session */
+	uint16_t			 ct_state_start;
+	/* Number of CT states allocated */
+	uint16_t			 ct_state_stride;
+	/* Starting index of range profiles allocated to the session */
+	uint16_t			 range_prof_start;
+	/* Number of range profiles allocated */
+	uint16_t			 range_prof_stride;
+	/* Starting index of range entries allocated to the session */
+	uint16_t			 range_entries_start;
+	/* Number of range entries allocated */
+	uint16_t			 range_entries_stride;
+	/* Starting index of LAG table entries allocated to the session */
+	uint16_t			 lag_tbl_entries_start;
+	/* Number of LAG table entries allocated */
+	uint16_t			 lag_tbl_entries_stride;
+} tf_session_hw_resc_free_input_t, *ptf_session_hw_resc_free_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_free_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource HW flush */
+typedef struct tf_session_hw_resc_flush_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the flush applies to RX */
+#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the flush applies to TX */
+#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Starting index of L2 CTX TCAM entries allocated to the session */
+	uint16_t			 l2_ctx_tcam_entries_start;
+	/* Number of L2 CTX TCAM entries allocated */
+	uint16_t			 l2_ctx_tcam_entries_stride;
+	/* Starting index of profile functions allocated to the session */
+	uint16_t			 prof_func_start;
+	/* Number of profile functions allocated */
+	uint16_t			 prof_func_stride;
+	/* Starting index of profile TCAM entries allocated to the session */
+	uint16_t			 prof_tcam_entries_start;
+	/* Number of profile TCAM entries allocated */
+	uint16_t			 prof_tcam_entries_stride;
+	/* Starting index of EM profile ids allocated to the session */
+	uint16_t			 em_prof_id_start;
+	/* Number of EM profile ids allocated */
+	uint16_t			 em_prof_id_stride;
+	/* Starting index of EM record entries allocated to the session */
+	uint16_t			 em_record_entries_start;
+	/* Number of EM record entries allocated */
+	uint16_t			 em_record_entries_stride;
+	/* Starting index of WC TCAM profile ids allocated to the session */
+	uint16_t			 wc_tcam_prof_id_start;
+	/* Number of WC TCAM profile ids allocated */
+	uint16_t			 wc_tcam_prof_id_stride;
+	/* Starting index of WC TCAM entries allocated to the session */
+	uint16_t			 wc_tcam_entries_start;
+	/* Number of WC TCAM entries allocated */
+	uint16_t			 wc_tcam_entries_stride;
+	/* Starting index of meter profiles allocated to the session */
+	uint16_t			 meter_profiles_start;
+	/* Number of meter profiles allocated */
+	uint16_t			 meter_profiles_stride;
+	/* Starting index of meter instances allocated to the session */
+	uint16_t			 meter_inst_start;
+	/* Number of meter instances allocated */
+	uint16_t			 meter_inst_stride;
+	/* Starting index of mirrors allocated to the session */
+	uint16_t			 mirrors_start;
+	/* Number of mirrors allocated */
+	uint16_t			 mirrors_stride;
+	/* Starting index of UPAR allocated to the session */
+	uint16_t			 upar_start;
+	/* Number of UPAR allocated */
+	uint16_t			 upar_stride;
+	/* Starting index of SP TCAM entries allocated to the session */
+	uint16_t			 sp_tcam_entries_start;
+	/* Number of SP TCAM entries allocated */
+	uint16_t			 sp_tcam_entries_stride;
+	/* Starting index of L2 functions allocated to the session */
+	uint16_t			 l2_func_start;
+	/* Number of L2 functions allocated */
+	uint16_t			 l2_func_stride;
+	/* Starting index of flexible key templates allocated to the session */
+	uint16_t			 flex_key_templ_start;
+	/* Number of flexible key templates allocated */
+	uint16_t			 flex_key_templ_stride;
+	/* Starting index of table scopes allocated to the session */
+	uint16_t			 tbl_scope_start;
+	/* Number of table scopes allocated */
+	uint16_t			 tbl_scope_stride;
+	/* Starting index of epoch0 entries allocated to the session */
+	uint16_t			 epoch0_entries_start;
+	/* Number of epoch0 entries allocated */
+	uint16_t			 epoch0_entries_stride;
+	/* Starting index of epoch1 entries allocated to the session */
+	uint16_t			 epoch1_entries_start;
+	/* Number of epoch1 entries allocated */
+	uint16_t			 epoch1_entries_stride;
+	/* Starting index of metadata allocated to the session */
+	uint16_t			 metadata_start;
+	/* Number of metadata allocated */
+	uint16_t			 metadata_stride;
+	/* Starting index of CT states allocated to the session */
+	uint16_t			 ct_state_start;
+	/* Number of CT states allocated */
+	uint16_t			 ct_state_stride;
+	/* Starting index of range profiles allocated to the session */
+	uint16_t			 range_prof_start;
+	/* Number of range profiles allocated */
+	uint16_t			 range_prof_stride;
+	/* Starting index of range entries allocated to the session */
+	uint16_t			 range_entries_start;
+	/* Number of range entries allocated */
+	uint16_t			 range_entries_stride;
+	/* Starting index of LAG table entries allocated to the session */
+	uint16_t			 lag_tbl_entries_start;
+	/* Number of LAG table entries allocated */
+	uint16_t			 lag_tbl_entries_stride;
+} tf_session_hw_resc_flush_input_t, *ptf_session_hw_resc_flush_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_flush_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource SRAM qcaps */
+typedef struct tf_session_sram_resc_qcaps_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query applies to RX */
+#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the query applies to TX */
+#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
+} tf_session_sram_resc_qcaps_input_t, *ptf_session_sram_resc_qcaps_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_qcaps_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource SRAM qcaps */
+typedef struct tf_session_sram_resc_qcaps_output {
+	/* Flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates Static partitioning */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
+	/* When set to 1, indicates Strategy 1 */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
+	/* When set to 2, indicates Strategy 2 */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
+	/* When set to 3, indicates Strategy 3 */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
+	/* Minimum guaranteed number of Full Action */
+	uint16_t			 full_action_min;
+	/* Maximum non-guaranteed number of Full Action */
+	uint16_t			 full_action_max;
+	/* Minimum guaranteed number of MCG */
+	uint16_t			 mcg_min;
+	/* Maximum non-guaranteed number of MCG */
+	uint16_t			 mcg_max;
+	/* Minimum guaranteed number of Encap 8B */
+	uint16_t			 encap_8b_min;
+	/* Maximum non-guaranteed number of Encap 8B */
+	uint16_t			 encap_8b_max;
+	/* Minimum guaranteed number of Encap 16B */
+	uint16_t			 encap_16b_min;
+	/* Maximum non-guaranteed number of Encap 16B */
+	uint16_t			 encap_16b_max;
+	/* Minimum guaranteed number of Encap 64B */
+	uint16_t			 encap_64b_min;
+	/* Maximum non-guaranteed number of Encap 64B */
+	uint16_t			 encap_64b_max;
+	/* Minimum guaranteed number of SP SMAC */
+	uint16_t			 sp_smac_min;
+	/* Maximum non-guaranteed number of SP SMAC */
+	uint16_t			 sp_smac_max;
+	/* Minimum guaranteed number of SP SMAC IPv4 */
+	uint16_t			 sp_smac_ipv4_min;
+	/* Maximum non-guaranteed number of SP SMAC IPv4 */
+	uint16_t			 sp_smac_ipv4_max;
+	/* Minimum guaranteed number of SP SMAC IPv6 */
+	uint16_t			 sp_smac_ipv6_min;
+	/* Maximum non-guaranteed number of SP SMAC IPv6 */
+	uint16_t			 sp_smac_ipv6_max;
+	/* Minimum guaranteed number of Counter 64B */
+	uint16_t			 counter_64b_min;
+	/* Maximum non-guaranteed number of Counter 64B */
+	uint16_t			 counter_64b_max;
+	/* Minimum guaranteed number of NAT SPORT */
+	uint16_t			 nat_sport_min;
+	/* Maximum non-guaranteed number of NAT SPORT */
+	uint16_t			 nat_sport_max;
+	/* Minimum guaranteed number of NAT DPORT */
+	uint16_t			 nat_dport_min;
+	/* Maximum non-guaranteed number of NAT DPORT */
+	uint16_t			 nat_dport_max;
+	/* Minimum guaranteed number of NAT S_IPV4 */
+	uint16_t			 nat_s_ipv4_min;
+	/* Maximum non-guaranteed number of NAT S_IPV4 */
+	uint16_t			 nat_s_ipv4_max;
+	/* Minimum guaranteed number of NAT D_IPV4 */
+	uint16_t			 nat_d_ipv4_min;
+	/* Maximum non-guaranteed number of NAT D_IPV4 */
+	uint16_t			 nat_d_ipv4_max;
+} tf_session_sram_resc_qcaps_output_t, *ptf_session_sram_resc_qcaps_output_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_qcaps_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource SRAM alloc */
+typedef struct tf_session_sram_resc_alloc_input {
+	/* FW Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the allocation applies to RX */
+#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the allocation applies to TX */
+#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Number of full action SRAM entries to be allocated */
+	uint16_t			 num_full_action;
+	/* Number of multicast groups to be allocated */
+	uint16_t			 num_mcg;
+	/* Number of Encap 8B entries to be allocated */
+	uint16_t			 num_encap_8b;
+	/* Number of Encap 16B entries to be allocated */
+	uint16_t			 num_encap_16b;
+	/* Number of Encap 64B entries to be allocated */
+	uint16_t			 num_encap_64b;
+	/* Number of SP SMAC entries to be allocated */
+	uint16_t			 num_sp_smac;
+	/* Number of SP SMAC IPv4 entries to be allocated */
+	uint16_t			 num_sp_smac_ipv4;
+	/* Number of SP SMAC IPv6 entries to be allocated */
+	uint16_t			 num_sp_smac_ipv6;
+	/* Number of Counter 64B entries to be allocated */
+	uint16_t			 num_counter_64b;
+	/* Number of NAT source ports to be allocated */
+	uint16_t			 num_nat_sport;
+	/* Number of NAT destination ports to be allocated */
+	uint16_t			 num_nat_dport;
+	/* Number of NAT source IPV4 addresses to be allocated */
+	uint16_t			 num_nat_s_ipv4;
+	/* Number of NAT destination IPV4 addresses to be allocated */
+	uint16_t			 num_nat_d_ipv4;
+} tf_session_sram_resc_alloc_input_t, *ptf_session_sram_resc_alloc_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_alloc_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource SRAM alloc */
+typedef struct tf_session_sram_resc_alloc_output {
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Starting index of full action SRAM entries allocated to the session */
+	uint16_t			 full_action_start;
+	/* Number of full action SRAM entries allocated */
+	uint16_t			 full_action_stride;
+	/* Starting index of multicast groups allocated to this session */
+	uint16_t			 mcg_start;
+	/* Number of multicast groups allocated */
+	uint16_t			 mcg_stride;
+	/* Starting index of encap 8B entries allocated to the session */
+	uint16_t			 encap_8b_start;
+	/* Number of encap 8B entries allocated */
+	uint16_t			 encap_8b_stride;
+	/* Starting index of encap 16B entries allocated to the session */
+	uint16_t			 encap_16b_start;
+	/* Number of encap 16B entries allocated */
+	uint16_t			 encap_16b_stride;
+	/* Starting index of encap 64B entries allocated to the session */
+	uint16_t			 encap_64b_start;
+	/* Number of encap 64B entries allocated */
+	uint16_t			 encap_64b_stride;
+	/* Starting index of SP SMAC entries allocated to the session */
+	uint16_t			 sp_smac_start;
+	/* Number of SP SMAC entries allocated */
+	uint16_t			 sp_smac_stride;
+	/* Starting index of SP SMAC IPv4 entries allocated to the session */
+	uint16_t			 sp_smac_ipv4_start;
+	/* Number of SP SMAC IPv4 entries allocated */
+	uint16_t			 sp_smac_ipv4_stride;
+	/* Starting index of SP SMAC IPv6 entries allocated to the session */
+	uint16_t			 sp_smac_ipv6_start;
+	/* Number of SP SMAC IPv6 entries allocated */
+	uint16_t			 sp_smac_ipv6_stride;
+	/* Starting index of Counter 64B entries allocated to the session */
+	uint16_t			 counter_64b_start;
+	/* Number of Counter 64B entries allocated */
+	uint16_t			 counter_64b_stride;
+	/* Starting index of NAT source ports allocated to the session */
+	uint16_t			 nat_sport_start;
+	/* Number of NAT source ports allocated */
+	uint16_t			 nat_sport_stride;
+	/* Starting index of NAT destination ports allocated to the session */
+	uint16_t			 nat_dport_start;
+	/* Number of NAT destination ports allocated */
+	uint16_t			 nat_dport_stride;
+	/* Starting index of NAT source IPV4 addresses allocated to the session */
+	uint16_t			 nat_s_ipv4_start;
+	/* Number of NAT source IPV4 addresses allocated */
+	uint16_t			 nat_s_ipv4_stride;
+	/*
+	 * Starting index of NAT destination IPV4 addresses allocated to the
+	 * session
+	 */
+	uint16_t			 nat_d_ipv4_start;
+	/* Number of NAT destination IPV4 addresses allocated */
+	uint16_t			 nat_d_ipv4_stride;
+} tf_session_sram_resc_alloc_output_t, *ptf_session_sram_resc_alloc_output_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_alloc_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource SRAM free */
+typedef struct tf_session_sram_resc_free_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the free applies to RX */
+#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the free applies to TX */
+#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Starting index of full action SRAM entries allocated to the session */
+	uint16_t			 full_action_start;
+	/* Number of full action SRAM entries allocated */
+	uint16_t			 full_action_stride;
+	/* Starting index of multicast groups allocated to this session */
+	uint16_t			 mcg_start;
+	/* Number of multicast groups allocated */
+	uint16_t			 mcg_stride;
+	/* Starting index of encap 8B entries allocated to the session */
+	uint16_t			 encap_8b_start;
+	/* Number of encap 8B entries allocated */
+	uint16_t			 encap_8b_stride;
+	/* Starting index of encap 16B entries allocated to the session */
+	uint16_t			 encap_16b_start;
+	/* Number of encap 16B entries allocated */
+	uint16_t			 encap_16b_stride;
+	/* Starting index of encap 64B entries allocated to the session */
+	uint16_t			 encap_64b_start;
+	/* Number of encap 64B entries allocated */
+	uint16_t			 encap_64b_stride;
+	/* Starting index of SP SMAC entries allocated to the session */
+	uint16_t			 sp_smac_start;
+	/* Number of SP SMAC entries allocated */
+	uint16_t			 sp_smac_stride;
+	/* Starting index of SP SMAC IPv4 entries allocated to the session */
+	uint16_t			 sp_smac_ipv4_start;
+	/* Number of SP SMAC IPv4 entries allocated */
+	uint16_t			 sp_smac_ipv4_stride;
+	/* Starting index of SP SMAC IPv6 entries allocated to the session */
+	uint16_t			 sp_smac_ipv6_start;
+	/* Number of SP SMAC IPv6 entries allocated */
+	uint16_t			 sp_smac_ipv6_stride;
+	/* Starting index of Counter 64B entries allocated to the session */
+	uint16_t			 counter_64b_start;
+	/* Number of Counter 64B entries allocated */
+	uint16_t			 counter_64b_stride;
+	/* Starting index of NAT source ports allocated to the session */
+	uint16_t			 nat_sport_start;
+	/* Number of NAT source ports allocated */
+	uint16_t			 nat_sport_stride;
+	/* Starting index of NAT destination ports allocated to the session */
+	uint16_t			 nat_dport_start;
+	/* Number of NAT destination ports allocated */
+	uint16_t			 nat_dport_stride;
+	/* Starting index of NAT source IPV4 addresses allocated to the session */
+	uint16_t			 nat_s_ipv4_start;
+	/* Number of NAT source IPV4 addresses allocated */
+	uint16_t			 nat_s_ipv4_stride;
+	/*
+	 * Starting index of NAT destination IPV4 addresses allocated to the
+	 * session
+	 */
+	uint16_t			 nat_d_ipv4_start;
+	/* Number of NAT destination IPV4 addresses allocated */
+	uint16_t			 nat_d_ipv4_stride;
+} tf_session_sram_resc_free_input_t, *ptf_session_sram_resc_free_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_free_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource SRAM flush */
+typedef struct tf_session_sram_resc_flush_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the flush applies to RX */
+#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the flush applies to TX */
+#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Starting index of full action SRAM entries allocated to the session */
+	uint16_t			 full_action_start;
+	/* Number of full action SRAM entries allocated */
+	uint16_t			 full_action_stride;
+	/* Starting index of multicast groups allocated to this session */
+	uint16_t			 mcg_start;
+	/* Number of multicast groups allocated */
+	uint16_t			 mcg_stride;
+	/* Starting index of encap 8B entries allocated to the session */
+	uint16_t			 encap_8b_start;
+	/* Number of encap 8B entries allocated */
+	uint16_t			 encap_8b_stride;
+	/* Starting index of encap 16B entries allocated to the session */
+	uint16_t			 encap_16b_start;
+	/* Number of encap 16B entries allocated */
+	uint16_t			 encap_16b_stride;
+	/* Starting index of encap 64B entries allocated to the session */
+	uint16_t			 encap_64b_start;
+	/* Number of encap 64B entries allocated */
+	uint16_t			 encap_64b_stride;
+	/* Starting index of SP SMAC entries allocated to the session */
+	uint16_t			 sp_smac_start;
+	/* Number of SP SMAC entries allocated */
+	uint16_t			 sp_smac_stride;
+	/* Starting index of SP SMAC IPv4 entries allocated to the session */
+	uint16_t			 sp_smac_ipv4_start;
+	/* Number of SP SMAC IPv4 entries allocated */
+	uint16_t			 sp_smac_ipv4_stride;
+	/* Starting index of SP SMAC IPv6 entries allocated to the session */
+	uint16_t			 sp_smac_ipv6_start;
+	/* Number of SP SMAC IPv6 entries allocated */
+	uint16_t			 sp_smac_ipv6_stride;
+	/* Starting index of Counter 64B entries allocated to the session */
+	uint16_t			 counter_64b_start;
+	/* Number of Counter 64B entries allocated */
+	uint16_t			 counter_64b_stride;
+	/* Starting index of NAT source ports allocated to the session */
+	uint16_t			 nat_sport_start;
+	/* Number of NAT source ports allocated */
+	uint16_t			 nat_sport_stride;
+	/* Starting index of NAT destination ports allocated to the session */
+	uint16_t			 nat_dport_start;
+	/* Number of NAT destination ports allocated */
+	uint16_t			 nat_dport_stride;
+	/* Starting index of NAT source IPV4 addresses allocated to the session */
+	uint16_t			 nat_s_ipv4_start;
+	/* Number of NAT source IPV4 addresses allocated */
+	uint16_t			 nat_s_ipv4_stride;
+	/*
+	 * Starting index of NAT destination IPV4 addresses allocated to the
+	 * session
+	 */
+	uint16_t			 nat_d_ipv4_start;
+	/* Number of NAT destination IPV4 addresses allocated */
+	uint16_t			 nat_d_ipv4_stride;
+} tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_flush_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for table type get */
+typedef struct tf_tbl_type_get_input {
+	/* Firmware Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the get applies to RX */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
+	/* When set to 1, indicates the get applies to TX */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* Type of the object to get */
+	uint32_t			 type;
+	/* Index to get */
+	uint32_t			 index;
+} tf_tbl_type_get_input_t, *ptf_tbl_type_get_input_t;
+BUILD_BUG_ON(sizeof(tf_tbl_type_get_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for table type get */
+typedef struct tf_tbl_type_get_output {
+	/* Size of the data read in bytes */
+	uint16_t			 size;
+	/* Data read */
+	uint8_t			  data[TF_BULK_RECV];
+} tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
+BUILD_BUG_ON(sizeof(tf_tbl_type_get_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for EM internal rule insert */
+typedef struct tf_em_internal_insert_input {
+	/* Firmware Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the insert applies to RX */
+#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the insert applies to TX */
+#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* strength */
+	uint16_t			 strength;
+	/* index to action */
+	uint32_t			 action_ptr;
+	/* index of em record */
+	uint32_t			 em_record_idx;
+	/* EM Key value */
+	uint64_t			 em_key[8];
+	/* number of bits in em_key */
+	uint16_t			 em_key_bitlen;
+} tf_em_internal_insert_input_t, *ptf_em_internal_insert_input_t;
+BUILD_BUG_ON(sizeof(tf_em_internal_insert_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for EM internal rule insert */
+typedef struct tf_em_internal_insert_output {
+	/* EM record pointer index */
+	uint16_t			 rptr_index;
+	/* EM record offset 0~3 */
+	uint8_t			  rptr_entry;
+} tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
+BUILD_BUG_ON(sizeof(tf_em_internal_insert_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for EM INTERNAL rule delete */
+typedef struct tf_em_internal_delete_input {
+	/* Session Id */
+	uint32_t			 tf_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the delete applies to RX */
+#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the delete applies to TX */
+#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* EM internal flow handle */
+	uint64_t			 flow_handle;
+	/* EM Key value */
+	uint64_t			 em_key[8];
+	/* number of bits in em_key */
+	uint16_t			 em_key_bitlen;
+} tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
+BUILD_BUG_ON(sizeof(tf_em_internal_delete_input_t) <= TF_MAX_REQ_SIZE);
+
+#endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
new file mode 100644
index 0000000..6bafae5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdio.h>
+
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "bnxt.h"
+
+int
+tf_open_session(struct tf                    *tfp,
+		struct tf_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session *session;
+	struct tfp_calloc_parms alloc_parms;
+	unsigned int domain, bus, slot, device;
+	uint8_t fw_session_id;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	/* Filter out any non-supported device types on the Core
+	 * side. It is assumed that the Firmware is supported if the
+	 * firmware session open succeeds.
+	 */
+	if (parms->device_type != TF_DEVICE_TYPE_WH)
+		return -ENOTSUP;
+
+	/* Build the beginning of session_id */
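+	/* ctrl_chan_name is expected to be a PCI address string;
+	 * e.g. (illustrative) "0000:03:00.0" parses to domain 0,
+	 * bus 3, slot 0, device 0.
+	 */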
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%u",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* open FW session and get a new session_id */
+	rc = tf_msg_session_open(tfp,
+				 parms->ctrl_chan_name,
+				 &fw_session_id);
+	if (rc) {
+		/* Log error */
+		if (rc == -EEXIST)
+			PMD_DRV_LOG(ERR,
+				    "Session is already open, rc:%d\n",
+				    rc);
+		else
+			PMD_DRV_LOG(ERR,
+				    "Open message send failed, rc:%d\n",
+				    rc);
+
+		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
+		return rc;
+	}
+
+	/* Allocate session */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = sizeof(struct tf_session_info);
+	alloc_parms.alignment = 0;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%d\n",
+			    rc);
+		return rc; /* nothing allocated yet to clean up */
+	}
+
+	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
+
+	/* Allocate core data for the session */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = sizeof(struct tf_session);
+	alloc_parms.alignment = 0;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%d\n",
+			    rc);
+		goto cleanup;
+	}
+
+	tfp->session->core_data = alloc_parms.mem_va;
+
+	session = (struct tf_session *)tfp->session->core_data;
+	tfp_memcpy(session->ctrl_chan_name,
+		   parms->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	/* Initialize Session */
+	session->device_type = parms->device_type;
+
+	/* Construct the Session ID */
+	session->session_id.internal.domain = domain;
+	session->session_id.internal.bus = bus;
+	session->session_id.internal.device = device;
+	session->session_id.internal.fw_session_id = fw_session_id;
+
+	rc = tf_msg_session_qcfg(tfp);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%d\n",
+			    rc);
+		goto cleanup_close;
+	}
+
+	session->ref_count++;
+
+	/* Return session ID */
+	parms->session_id = session->session_id;
+
+	PMD_DRV_LOG(INFO,
+		    "Session created, session_id:%d\n",
+		    parms->session_id.id);
+
+	PMD_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return 0;
+
+ cleanup:
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
+	return rc;
+
+ cleanup_close:
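+	/* Note: the firmware session opened above is not torn down
+	 * here; session close support is added by a later patch in
+	 * this series.
+	 */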
+	return -EINVAL;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
new file mode 100644
index 0000000..69433ac
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -0,0 +1,347 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_CORE_H_
+#define _TF_CORE_H_
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdio.h>
+
+#include "tf_project.h"
+
+/**
+ * @file
+ *
+ * Truflow Core API Header File
+ */
+
+/********** BEGIN Truflow Core DEFINITIONS **********/
+
+/**
+ * direction
+ */
+enum tf_dir {
+	TF_DIR_RX,  /**< Receive */
+	TF_DIR_TX,  /**< Transmit */
+	TF_DIR_MAX
+};
+
+/********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
+
+/**
+ * @page general General
+ *
+ * @ref tf_open_session
+ *
+ * @ref tf_attach_session
+ *
+ * @ref tf_close_session
+ */
+
+
+/** Session Version defines
+ *
+ * The version controls the format of the tf_session and
+ * tf_session_info structure. This is to ensure that upgrades
+ * between versions can be supported.
+ */
+#define TF_SESSION_VER_MAJOR  1   /**< Major Version */
+#define TF_SESSION_VER_MINOR  0   /**< Minor Version */
+#define TF_SESSION_VER_UPDATE 0   /**< Update Version */
+
+/** Session Name
+ *
+ * Name of the TruFlow control channel interface.  Expects
+ * format to be RTE Name specific, i.e. rte_eth_dev_get_name_by_port()
+ */
+#define TF_SESSION_NAME_MAX       64
+
+#define TF_FW_SESSION_ID_INVALID  0xFF  /**< Invalid FW Session ID define */
+
+/** Session Identifier
+ *
+ * Unique session identifier which includes PCIe bus info to
+ * distinguish the PF and session info to identify the associated
+ * TruFlow session. Session ID is constructed from the passed in
+ * ctrl_chan_name in tf_open_session() together with an allocated
+ * fw_session_id. Done by TruFlow on tf_open_session().
+ */
+union tf_session_id {
+	uint32_t id;
+	struct {
+		uint8_t domain;
+		uint8_t bus;
+		uint8_t device;
+		uint8_t fw_session_id;
+	} internal;
+};
+
+/** Session Version
+ *
+ * The version controls the format of the tf_session and
+ * tf_session_info structure. This is to ensure that upgrades
+ * between versions can be supported.
+ *
+ * Please see the TF_VER_MAJOR/MINOR and UPDATE defines.
+ */
+struct tf_session_version {
+	uint8_t major;
+	uint8_t minor;
+	uint8_t update;
+};
+
+/** Session supported device types
+ *
+ */
+enum tf_device_type {
+	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
+	TF_DEVICE_TYPE_BRD2,   /**< TBD       */
+	TF_DEVICE_TYPE_BRD3,   /**< TBD       */
+	TF_DEVICE_TYPE_BRD4,   /**< TBD       */
+	TF_DEVICE_TYPE_MAX     /**< Maximum   */
+};
+
+/** TruFlow Session Information
+ *
+ * Structure defining a TruFlow Session, also known as a Management
+ * session. This structure is initialized at time of
+ * tf_open_session(). It is passed to all of the TruFlow APIs as way
+ * to prescribe and isolate resources between different TruFlow ULP
+ * Applications.
+ */
+struct tf_session_info {
+	/**
+	 * TruFlow Version. Used to control the structure layout when
+	 * sharing sessions. No guarantee that a secondary process
+	 * would come from the same version of an executable.
+	 * TruFlow initializes this variable on tf_open_session().
+	 *
+	 * Owner:  TruFlow
+	 * Access: TruFlow
+	 */
+	struct tf_session_version ver;
+	/**
+	 * will be STAILQ_ENTRY(tf_session_info) next
+	 *
+	 * Owner:  ULP
+	 * Access: ULP
+	 */
+	void                 *next;
+	/**
+	 * Session ID is a unique identifier for the session. TruFlow
+	 * initializes this variable during tf_open_session()
+	 * processing.
+	 *
+	 * Owner:  TruFlow
+	 * Access: Truflow & ULP
+	 */
+	union tf_session_id   session_id;
+	/**
+	 * Protects access to core_data. Lock is initialized and owned
+	 * by ULP. TruFlow can access the core_data without checking
+	 * the lock.
+	 *
+	 * Owner:  ULP
+	 * Access: ULP
+	 */
+	uint8_t               spin_lock;
+	/**
+	 * The core_data holds the TruFlow tf_session data
+	 * structure. This memory is allocated and owned by TruFlow on
+	 * tf_open_session().
+	 *
+	 * TruFlow uses this memory for session management control
+	 * until the session is closed by ULP. Access control is done
+	 * by the spin_lock which ULP controls ahead of TruFlow API
+	 * calls.
+	 *
+	 * Please see tf_open_session_parms for specification details
+	 * on this variable.
+	 *
+	 * Owner:  TruFlow
+	 * Access: TruFlow
+	 */
+	void                 *core_data;
+	/**
+	 * The core_data_sz_bytes specifies the size of core_data in
+	 * bytes.
+	 *
+	 * The size is set by TruFlow on tf_open_session().
+	 *
+	 * Please see tf_open_session_parms for specification details
+	 * on this variable.
+	 *
+	 * Owner:  TruFlow
+	 * Access: TruFlow
+	 */
+	uint32_t              core_data_sz_bytes;
+};
+
+/** TruFlow handle
+ *
+ * Contains a pointer to the session info. Allocated by ULP and passed
+ * to TruFlow using tf_open_session(). TruFlow will populate the
+ * session info at that time. Additional 'opens' can be done using
+ * the same session_info by using tf_attach_session().
+ *
+ * It is expected that ULP allocates this memory as shared memory.
+ *
+ * NOTE: This struct must be within the BNXT PMD struct bnxt
+ *       (bp). This allows use of container_of() to get access to the PMD.
+ */
+struct tf {
+	struct tf_session_info *session;
+};
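+
+/*
+ * Illustrative only: if struct bnxt embeds this handle as a member
+ * named 'tfp' (an assumed name, not defined in this header), the PMD
+ * can be recovered from a TF handle with:
+ *
+ *   struct bnxt *bp = container_of(tfp, struct bnxt, tfp);
+ */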
+
+
+/**
+ * tf_open_session parameters definition.
+ */
+struct tf_open_session_parms {
+	/** [in] ctrl_chan_name
+	 *
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * The ctrl_chan_name can be looked up by using
+	 * rte_eth_dev_get_name_by_port() within the ULP.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+	/** [in] shadow_copy
+	 *
+	 * Boolean controlling the use and availability of shadow
+	 * copy. Shadow copy will allow TruFlow to keep track of
+	 * resource content on the firmware side without having to
+	 * query firmware. Additional private session core_data will
+	 * be allocated if this boolean is set to 'true', default
+	 * 'false'.
+	 *
+	 * Size of memory depends on the NVM Resource settings for the
+	 * control channel.
+	 */
+	bool shadow_copy;
+	/** [in/out] session_id
+	 *
+	 * Session_id is unique per session.
+	 *
+	 * Session_id is composed of domain, bus, device and
+	 * fw_session_id. The construction is done by parsing the
+	 * ctrl_chan_name together with allocation of a fw_session_id.
+	 *
+	 * The session_id allows a session to be shared between devices.
+	 */
+	union tf_session_id session_id;
+	/** [in] device type
+	 *
+	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
+	 */
+	enum tf_device_type device_type;
+};
+
+/**
+ * Opens a new TruFlow management session.
+ *
+ * TruFlow will allocate session specific memory, shared memory, to
+ * hold its session data. This data is private to TruFlow.
+ *
+ * Multiple PFs can share the same session. An association, refcount,
+ * between session and PFs is maintained within TruFlow. Thus, a PF
+ * can attach to an existing session, see tf_attach_session().
+ *
+ * No other TruFlow APIs will succeed unless this API is first called and
+ * succeeds.
+ *
+ * tf_open_session() returns a session id that can be used on attach.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ * [in] parms
+ *   Pointer to open parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_open_session(struct tf *tfp,
+		    struct tf_open_session_parms *parms);
+
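+/*
+ * Usage sketch (illustrative only; 'port_id', 'bp' and the 'tfp'
+ * member name are assumptions, not defined in this header):
+ *
+ *   struct tf_open_session_parms parms = { 0 };
+ *   int rc;
+ *
+ *   rte_eth_dev_get_name_by_port(port_id, parms.ctrl_chan_name);
+ *   parms.device_type = TF_DEVICE_TYPE_WH;
+ *   rc = tf_open_session(&bp->tfp, &parms);
+ */
+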
+struct tf_attach_session_parms {
+	/** [in] ctrl_chan_name
+	 *
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * The ctrl_chan_name can be looked up by using
+	 * rte_eth_dev_get_name_by_port() within the ULP.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/** [in] attach_chan_name
+	 *
+	 * String containing name of attach channel interface to be
+	 * used for this session.
+	 *
+	 * The attach_chan_name must be given to a 2nd process after
+	 * the primary process has been created. This is the
+	 * ctrl_chan_name of the primary process and is used to find
+	 * the shared memory for the session that the attach is going
+	 * to use.
+	 */
+	char attach_chan_name[TF_SESSION_NAME_MAX];
+
+	/** [in] session_id
+	 *
+	 * Session_id is unique per session. For Attach the session_id
+	 * should be the session_id that was returned on the first
+	 * open.
+	 *
+	 * Session_id is composed of domain, bus, device and
+	 * fw_session_id. The construction is done by parsing the
+	 * ctrl_chan_name together with allocation of a fw_session_id
+	 * during tf_open_session().
+	 *
+	 * A reference count will be incremented on attach. A session
+	 * is first fully closed when reference count is zero by
+	 * calling tf_close_session().
+	 */
+	union tf_session_id session_id;
+};
+
+/**
+ * Attaches to an existing session. Used when more than one PF wants
+ * to share a single session. In that case all TruFlow management
+ * traffic will be sent to the TruFlow firmware using the 'PF' that
+ * did the attach, not the session ctrl channel.
+ *
+ * Attach will increment a ref count to manage the shared session data.
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] parms, pointer to attach parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_attach_session(struct tf *tfp,
+		      struct tf_attach_session_parms *parms);
+
+/**
+ * Closes an existing session. Cleans up all hardware and firmware
+ * state associated with the TruFlow application session when the
+ * last PF associated with the session closes and the refcount
+ * reaches zero.
+ *
+ * Returns success or failure code.
+ */
+int tf_close_session(struct tf *tfp);
+
+#endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
new file mode 100644
index 0000000..2b68681
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+#include <stdbool.h>
+#include <stdlib.h>
+
+#include "bnxt.h"
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tfp.h"
+
+#include "tf_msg_common.h"
+#include "tf_msg.h"
+#include "hsi_struct_def_dpdk.h"
+#include "hwrm_tf.h"
+
+/**
+ * Sends session open request to TF Firmware
+ */
+int
+tf_msg_session_open(struct tf *tfp,
+		    char *ctrl_chan_name,
+		    uint8_t *fw_session_id)
+{
+	int rc;
+	struct hwrm_tf_session_open_input req = { 0 };
+	struct hwrm_tf_session_open_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
+
+	parms.tf_type = HWRM_TF_SESSION_OPEN;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*fw_session_id = resp.fw_session_id;
+
+	return rc;
+}
+
+/**
+ * Sends session query config request to TF Firmware
+ */
+int
+tf_msg_session_qcfg(struct tf *tfp)
+{
+	int rc;
+	struct hwrm_tf_session_qcfg_input  req = { 0 };
+	struct hwrm_tf_session_qcfg_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	parms.tf_type = HWRM_TF_SESSION_QCFG;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
new file mode 100644
index 0000000..20ebf2e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_MSG_H_
+#define _TF_MSG_H_
+
+#include "tf_rm.h"
+
+struct tf;
+
+/**
+ * Sends session open request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
+ * [in/out] fw_session_id
+ *   Pointer to the fw_session_id that is allocated on firmware side
+ *
+ * Returns:
+ *   0 on success, negative errno on failure
+ */
+int tf_msg_session_open(struct tf *tfp,
+			char *ctrl_chan_name,
+			uint8_t *fw_session_id);
+
+/**
+ * Sends session query config request to TF Firmware
+ */
+int tf_msg_session_qcfg(struct tf *tfp);
+
+/**
+ * Sends session HW resource query capability request to TF Firmware
+ */
+int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
+				 enum tf_dir dir,
+				 struct tf_rm_hw_query *hw_query);
+
+#endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg_common.h b/drivers/net/bnxt/tf_core/tf_msg_common.h
new file mode 100644
index 0000000..7a4e825
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_msg_common.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_MSG_COMMON_H_
+#define _TF_MSG_COMMON_H_
+
+/* Communication Mailboxes */
+#define TF_CHIMP_MB 0
+#define TF_KONG_MB  1
+
+/* Helper to fill in the parms structure */
+#define MSG_PREP(parms, mb, type, subtype, req, resp) do {	\
+		parms.mailbox = mb;				\
+		parms.tf_type = type;				\
+		parms.tf_subtype = subtype;			\
+		parms.tf_resp_code = 0;				\
+		parms.req_size = sizeof(req);			\
+		parms.req_data = (uint32_t *)&(req);		\
+		parms.resp_size = sizeof(resp);			\
+		parms.resp_data = (uint32_t *)&(resp);		\
+	} while (0)
+
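+/*
+ * Usage sketch (illustrative; the type/subtype identifiers below are
+ * example names, not definitions provided by this header):
+ *
+ *   struct tfp_send_msg_parms parms = { 0 };
+ *   tf_session_hw_resc_qcaps_input_t  req  = { 0 };
+ *   tf_session_hw_resc_qcaps_output_t resp = { 0 };
+ *
+ *   MSG_PREP(parms, TF_KONG_MB, HWRM_TF,
+ *            HWRM_TFT_SESSION_HW_RESC_QCAPS, req, resp);
+ */
+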
+#define MSG_PREP_NO_REQ(parms, mb, type, subtype, resp) do {	\
+		parms.mailbox = mb;				\
+		parms.tf_type = type;				\
+		parms.tf_subtype = subtype;			\
+		parms.tf_resp_code = 0;				\
+		parms.req_size  = 0;				\
+		parms.req_data  = NULL;				\
+		parms.resp_size = sizeof(resp);			\
+		parms.resp_data = (uint32_t *)&(resp);		\
+	} while (0)
+
+#define MSG_PREP_NO_RESP(parms, mb, type, subtype, req) do {	\
+		parms.mailbox = mb;				\
+		parms.tf_type = type;				\
+		parms.tf_subtype = subtype;			\
+		parms.tf_resp_code = 0;				\
+		parms.req_size = sizeof(req);			\
+		parms.req_data = (uint32_t *)&(req);		\
+		parms.resp_size = 0;				\
+		parms.resp_data = NULL;				\
+	} while (0)
+
+#endif /* _TF_MSG_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_project.h b/drivers/net/bnxt/tf_core/tf_project.h
new file mode 100644
index 0000000..ab5f113
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_project.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_PROJECT_H_
+#define _TF_PROJECT_H_
+
+/* Wh+ support enabled */
+#ifndef TF_SUPPORT_P4
+#define TF_SUPPORT_P4 1
+#endif
+
+/* Shadow DB Support */
+#ifndef TF_SHADOW
+#define TF_SHADOW 0
+#endif
+
+/* Shared memory for session */
+#ifndef TF_SHARED
+#define TF_SHARED 0
+#endif
+
+#endif /* _TF_PROJECT_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
new file mode 100644
index 0000000..160abac
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_RESOURCES_H_
+#define _TF_RESOURCES_H_
+
+/*
+ * Hardware specific MAX values
+ * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
+ */
+
+/** HW Resource types
+ */
+enum tf_resource_type_hw {
+	/* Common HW resources for all chip variants */
+	TF_RESC_TYPE_HW_L2_CTXT_TCAM,
+	TF_RESC_TYPE_HW_PROF_FUNC,
+	TF_RESC_TYPE_HW_PROF_TCAM,
+	TF_RESC_TYPE_HW_EM_PROF_ID,
+	TF_RESC_TYPE_HW_EM_REC,
+	TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+	TF_RESC_TYPE_HW_WC_TCAM,
+	TF_RESC_TYPE_HW_METER_PROF,
+	TF_RESC_TYPE_HW_METER_INST,
+	TF_RESC_TYPE_HW_MIRROR,
+	TF_RESC_TYPE_HW_UPAR,
+	/* Wh+/Brd2 specific HW resources */
+	TF_RESC_TYPE_HW_SP_TCAM,
+	/* Brd2/Brd4 specific HW resources */
+	TF_RESC_TYPE_HW_L2_FUNC,
+	/* Brd3, Brd4 common HW resources */
+	TF_RESC_TYPE_HW_FKB,
+	/* Brd4 specific HW resources */
+	TF_RESC_TYPE_HW_TBL_SCOPE,
+	TF_RESC_TYPE_HW_EPOCH0,
+	TF_RESC_TYPE_HW_EPOCH1,
+	TF_RESC_TYPE_HW_METADATA,
+	TF_RESC_TYPE_HW_CT_STATE,
+	TF_RESC_TYPE_HW_RANGE_PROF,
+	TF_RESC_TYPE_HW_RANGE_ENTRY,
+	TF_RESC_TYPE_HW_LAG_ENTRY,
+	TF_RESC_TYPE_HW_MAX
+};
+#endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
new file mode 100644
index 0000000..5164d6b
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_RM_H_
+#define TF_RM_H_
+
+#include "tf_resources.h"
+#include "tf_core.h"
+
+struct tf;
+struct tf_session;
+
+/**
+ * Resource query single entry
+ */
+struct tf_rm_query_entry {
+	/** Minimum guaranteed number of elements */
+	uint16_t min;
+	/** Maximum non-guaranteed number of elements */
+	uint16_t max;
+};
+
+/**
+ * Resource query array of HW entities
+ */
+struct tf_rm_hw_query {
+	/** array of HW resource entries */
+	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
+};
+
+#endif /* TF_RM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
new file mode 100644
index 0000000..c30ebbe
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SESSION_H_
+#define _TF_SESSION_H_
+
+#include <stdint.h>
+#include <stdlib.h>
+
+#include "tf_core.h"
+#include "tf_rm.h"
+
+/** Session defines
+ */
+#define TF_SESSIONS_MAX	          1          /** max # sessions */
+#define TF_SESSION_ID_INVALID     0xFFFFFFFF /** Invalid Session ID define */
+
+/** Session
+ *
+ * Shared memory containing private TruFlow session information.
+ * Through this structure the session can keep track of resource
+ * allocations and (if so configured) any shadow copy of flow
+ * information.
+ *
+ * Memory is assigned to the TruFlow instance by way of
+ * tf_open_session. Memory is allocated and owned by the caller,
+ * i.e. the ULP.
+ *
+ * Access control to this shared memory is handled by the spin_lock in
+ * tf_session_info.
+ */
+struct tf_session {
+	/** TrueFlow Version. Used to control the structure layout
+	 * when sharing sessions. No guarantee that a secondary
+	 * process would come from the same version of an executable.
+	 */
+	struct tf_session_version ver;
+
+	/** Device type, provided by tf_open_session().
+	 */
+	enum tf_device_type device_type;
+
+	/** Session ID, allocated by FW on tf_open_session().
+	 */
+	union tf_session_id session_id;
+
+	/**
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/**
+	 * Boolean controlling the use and availability of shadow
+	 * copy. Shadow copy will allow the TruFlow Core to keep track
+	 * of resource content on the firmware side without having to
+	 * query firmware. Additional private session core_data will
+	 * be allocated if this boolean is set to 'true', default
+	 * 'false'.
+	 *
+	 * Size of memory depends on the NVM Resource settings for the
+	 * control channel.
+	 */
+	bool shadow_copy;
+
+	/**
+	 * Session Reference Count. To keep track of functions per
+	 * session the ref_count is incremented. There is also a
+	 * parallel TruFlow Firmware ref_count in case the TruFlow
+	 * Core goes away without informing the Firmware.
+	 */
+	uint8_t ref_count;
+
+	/** CRC32 seed table */
+#define TF_LKUP_SEED_MEM_SIZE 512
+	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
+	/** Lookup3 init values */
+	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
+
+};
+#endif /* _TF_SESSION_H_ */
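
As a usage note, callers in this series validate the session handle before
touching this structure; a condensed sketch of that guard, mirroring the
tf_close_session code later in this patch set:

	struct tf_session *tfs;

	if (tfp == NULL || tfp->session == NULL)
		return -EINVAL;

	tfs = (struct tf_session *)(tfp->session->core_data);
	if (tfs->session_id.id == TF_SESSION_ID_INVALID)
		return -EINVAL;	/* never opened, or already closed */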
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
new file mode 100644
index 0000000..fb5c297
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -0,0 +1,163 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_memcpy.h>
+#include <rte_byteorder.h>
+#include <rte_config.h>
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+
+#include "tf_core.h"
+#include "tfp.h"
+#include "bnxt.h"
+#include "bnxt_hwrm.h"
+#include "tf_msg_common.h"
+
+/**
+ * Sends TruFlow msg to the TruFlow Firmware using
+ * a message specific HWRM message type.
+ *
+ * Returns success or failure code.
+ */
+int
+tfp_send_msg_direct(struct tf *tfp,
+		    struct tfp_send_msg_parms *parms)
+{
+	int      rc = 0;
+	uint8_t  use_kong_mb = 1;
+
+	if (parms == NULL)
+		return -EINVAL;
+
+	if (parms->mailbox == TF_CHIMP_MB)
+		use_kong_mb = 0;
+
+	rc = bnxt_hwrm_tf_message_direct(container_of(tfp,
+					       struct bnxt,
+					       tfp),
+					 use_kong_mb,
+					 parms->tf_type,
+					 parms->req_data,
+					 parms->req_size,
+					 parms->resp_data,
+					 parms->resp_size);
+
+	return rc;
+}
+
+/**
+ * Sends preformatted TruFlow msg to the TruFlow Firmware using
+ * the Truflow tunnel HWRM message type.
+ *
+ * Returns success or failure code.
+ */
+int
+tfp_send_msg_tunneled(struct tf *tfp,
+		      struct tfp_send_msg_parms *parms)
+{
+	int      rc = 0;
+	uint8_t  use_kong_mb = 1;
+
+	if (parms == NULL)
+		return -EINVAL;
+
+	if (parms->mailbox == TF_CHIMP_MB)
+		use_kong_mb = 0;
+
+	rc = bnxt_hwrm_tf_message_tunneled(container_of(tfp,
+						  struct bnxt,
+						  tfp),
+					   use_kong_mb,
+					   parms->tf_type,
+					   parms->tf_subtype,
+					   &parms->tf_resp_code,
+					   parms->req_data,
+					   parms->req_size,
+					   parms->resp_data,
+					   parms->resp_size);
+
+	return rc;
+}
+
+/**
+ * Allocates zeroed memory from the heap.
+ *
+ * Returns success or failure code.
+ */
+int
+tfp_calloc(struct tfp_calloc_parms *parms)
+{
+	if (parms == NULL)
+		return -EINVAL;
+
+	parms->mem_va = rte_zmalloc("tf",
+				    (parms->nitems * parms->size),
+				    parms->alignment);
+	if (parms->mem_va == NULL) {
+		PMD_DRV_LOG(ERR, "Allocate failed mem_va\n");
+		return -ENOMEM;
+	}
+
+	parms->mem_pa = (void *)rte_mem_virt2iova(parms->mem_va);
+	if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
+		PMD_DRV_LOG(ERR, "Allocate failed mem_pa\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/**
+ * Frees the memory space pointed to by the provided pointer. The
+ * pointer must have been returned from the tfp_calloc().
+ */
+void
+tfp_free(void *addr)
+{
+	rte_free(addr);
+}
+
+/**
+ * Copies n bytes from src memory to dest memory. The memory areas
+ * must not overlap.
+ */
+void
+tfp_memcpy(void *dest, void *src, size_t n)
+{
+	rte_memcpy(dest, src, n);
+}
+
+/**
+ * Used to initialize portable spin lock
+ */
+void
+tfp_spinlock_init(struct tfp_spinlock_parms *parms)
+{
+	rte_spinlock_init(&parms->slock);
+}
+
+/**
+ * Used to lock portable spin lock
+ */
+void
+tfp_spinlock_lock(struct tfp_spinlock_parms *parms)
+{
+	rte_spinlock_lock(&parms->slock);
+}
+
+/**
+ * Used to unlock portable spin lock
+ */
+void
+tfp_spinlock_unlock(struct tfp_spinlock_parms *parms)
+{
+	rte_spinlock_unlock(&parms->slock);
+}
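
A minimal usage sketch of the allocation helpers above; the sizes and alignment
are arbitrary example values:

	struct tfp_calloc_parms alloc_parms;
	int rc;

	alloc_parms.nitems = 1;
	alloc_parms.size = 4096;	/* example: one 4KB block */
	alloc_parms.alignment = 4096;	/* example: 4K aligned for DMA use */

	rc = tfp_calloc(&alloc_parms);
	if (rc)
		return rc;

	/* alloc_parms.mem_va holds the virtual address, mem_pa the IOVA */
	tfp_free(alloc_parms.mem_va);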
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
new file mode 100644
index 0000000..8d5e94e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -0,0 +1,188 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* This header file defines the Portability structures and APIs for
+ * TruFlow.
+ */
+
+#ifndef _TFP_H_
+#define _TFP_H_
+
+#include <rte_spinlock.h>
+
+/** Spinlock
+ */
+struct tfp_spinlock_parms {
+	rte_spinlock_t slock;
+};
+
+/**
+ * @file
+ *
+ * TrueFlow Portability API Header File
+ */
+
+/** send message parameter definition
+ */
+struct tfp_send_msg_parms {
+	/**
+	 * [in] mailbox, specifying the Mailbox to send the command on.
+	 */
+	uint32_t  mailbox;
+	/**
+	 * [in] tf_type, specifies the tlv type.
+	 */
+	uint16_t  tf_type;
+	/**
+	 * [in] tf_subtype, specifies the tlv subtype.
+	 */
+	uint16_t  tf_subtype;
+	/**
+	 * [out] tf_resp_code, response code from the internal tlv
+	 *       message. Only supported on tunneled messages.
+	 */
+	uint32_t tf_resp_code;
+	/**
+	 * [in] req_size, number specifying the request size of the data in bytes
+	 */
+	uint32_t req_size;
+	/**
+	 * [in] req_data, pointer to the request data to be sent within the HWRM command
+	 */
+	uint32_t *req_data;
+	/**
+	 * [in] resp_size, number specifying the size of the response buffer in bytes
+	 */
+	uint32_t resp_size;
+	/**
+	 * [out] resp_data, pointer to the buffer used to return the response data
+	 */
+	uint32_t *resp_data;
+};
+
+/** calloc parameter definition
+ */
+struct tfp_calloc_parms {
+	/**
+	 * [in] nitems, number specifying the number of items to allocate.
+	 */
+	size_t nitems;
+	/**
+	 * [in] size, number specifying the size of each memory item
+	 *      requested. Size is in bytes.
+	 */
+	size_t size;
+	/**
+	 * [in] alignment, number indicating the byte alignment required. 0
+	 *      - don't care, 16 - 16-byte alignment, 4K - 4K alignment, etc.
+	 */
+	size_t alignment;
+	/**
+	 * [out] mem_va, pointer to the allocated memory.
+	 */
+	void *mem_va;
+	/**
+	 * [out] mem_pa, physical address of the allocated memory.
+	 */
+	void *mem_pa;
+};
+
+/**
+ * @page Portability
+ *
+ * @ref tfp_send_msg_direct
+ * @ref tfp_send_msg_tunneled
+ *
+ * @ref tfp_calloc
+ * @ref tfp_free
+ * @ref tfp_memcpy
+ *
+ * @ref tfp_spinlock_init
+ * @ref tfp_spinlock_lock
+ * @ref tfp_spinlock_unlock
+ *
+ * @ref tfp_cpu_to_le_16
+ * @ref tfp_le_to_cpu_16
+ * @ref tfp_cpu_to_le_32
+ * @ref tfp_le_to_cpu_32
+ * @ref tfp_cpu_to_le_64
+ * @ref tfp_le_to_cpu_64
+ * @ref tfp_cpu_to_be_16
+ * @ref tfp_be_to_cpu_16
+ * @ref tfp_cpu_to_be_32
+ * @ref tfp_be_to_cpu_32
+ * @ref tfp_cpu_to_be_64
+ * @ref tfp_be_to_cpu_64
+ */
+
+#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
+#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
+#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
+#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
+#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
+#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
+#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
+#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
+#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
+#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
+#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
+#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
+#define tfp_bswap_16(val) rte_bswap16(val)
+#define tfp_bswap_32(val) rte_bswap32(val)
+#define tfp_bswap_64(val) rte_bswap64(val)
+
+/**
+ * Provides communication capability from the TrueFlow API layer to
+ * the TrueFlow firmware. The portability layer internally provides
+ * the transport to the firmware.
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] parms, parameter structure
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int tfp_send_msg_direct(struct tf *tfp,
+			struct tfp_send_msg_parms *parms);
+
+/**
+ * Provides communication capability from the TrueFlow API layer to
+ * the TrueFlow firmware. The portability layer internally provides
+ * the transport to the firmware.
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] parms, parameter structure
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int tfp_send_msg_tunneled(struct tf                 *tfp,
+			  struct tfp_send_msg_parms *parms);
+
+/**
+ * Allocates zeroed memory from the heap.
+ *
+ * NOTE: Also performs virt2phy address conversion by default, thus
+ * it can be expensive to invoke.
+ *
+ * [in] parms, parameter structure
+ *
+ * Returns:
+ *   0              - Success
+ *   -ENOMEM        - No memory available
+ *   -EINVAL        - Parameter error
+ */
+int tfp_calloc(struct tfp_calloc_parms *parms);
+
+void tfp_free(void *addr);
+void tfp_memcpy(void *dest, void *src, size_t n);
+void tfp_spinlock_init(struct tfp_spinlock_parms *slock);
+void tfp_spinlock_lock(struct tfp_spinlock_parms *slock);
+void tfp_spinlock_unlock(struct tfp_spinlock_parms *slock);
+#endif /* _TFP_H_ */
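
For completeness, a short sketch of the spinlock wrappers guarding a critical
section; the shared state being protected is illustrative:

	struct tfp_spinlock_parms lock;

	tfp_spinlock_init(&lock);

	tfp_spinlock_lock(&lock);
	/* e.g. update shared session state here */
	tfp_spinlock_unlock(&lock);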
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 05/34] net/bnxt: add initial tf core session close support
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (3 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 04/34] net/bnxt: add initial tf core session open Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 06/34] net/bnxt: add tf core session sram functions Venkat Duvvuru
                     ` (30 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TruFlow session and resource support functions
- Add TruFlow session close API and related message support functions
  for both session and hw resources

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile               |   1 +
 drivers/net/bnxt/tf_core/bitalloc.c     | 364 +++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/bitalloc.h     | 119 ++++++++++
 drivers/net/bnxt/tf_core/tf_core.c      |  86 +++++++
 drivers/net/bnxt/tf_core/tf_msg.c       | 401 ++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h       |  42 ++++
 drivers/net/bnxt/tf_core/tf_resources.h |  24 +-
 drivers/net/bnxt/tf_core/tf_rm.h        | 113 +++++++++
 drivers/net/bnxt/tf_core/tf_session.h   |   1 +
 9 files changed, 1146 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.c
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 8a68059..8474673 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -48,6 +48,7 @@ CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core
 endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
diff --git a/drivers/net/bnxt/tf_core/bitalloc.c b/drivers/net/bnxt/tf_core/bitalloc.c
new file mode 100644
index 0000000..fb4df9a
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/bitalloc.c
@@ -0,0 +1,364 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bitalloc.h"
+
+#define BITALLOC_MAX_LEVELS 6
+
+/* Finds the first bit set plus 1, equivalent to gcc __builtin_ffs */
+static int
+ba_ffs(bitalloc_word_t v)
+{
+	int c; /* c will be the number of zero bits on the right plus 1 */
+
+	v &= -v;
+	c = v ? 32 : 0;
+
+	if (v & 0x0000FFFF)
+		c -= 16;
+	if (v & 0x00FF00FF)
+		c -= 8;
+	if (v & 0x0F0F0F0F)
+		c -= 4;
+	if (v & 0x33333333)
+		c -= 2;
+	if (v & 0x55555555)
+		c -= 1;
+
+	return c;
+}
+
+int
+ba_init(struct bitalloc *pool, int size)
+{
+	bitalloc_word_t *mem = (bitalloc_word_t *)pool;
+	int       i;
+
+	/* Initialize */
+	pool->size = 0;
+
+	if (size < 1 || size > BITALLOC_MAX_SIZE)
+		return -1;
+
+	/* Zero structure */
+	for (i = 0;
+	     i < (int)(BITALLOC_SIZEOF(size) / sizeof(bitalloc_word_t));
+	     i++)
+		mem[i] = 0;
+
+	/* Initialize */
+	pool->size = size;
+
+	/* Embed number of words of next level, after each level */
+	int words[BITALLOC_MAX_LEVELS];
+	int lev = 0;
+	int offset = 0;
+
+	words[0] = (size + 31) / 32;
+	while (words[lev] > 1) {
+		lev++;
+		words[lev] = (words[lev - 1] + 31) / 32;
+	}
+
+	while (lev) {
+		offset += words[lev];
+		pool->storage[offset++] = words[--lev];
+	}
+
+	/* Free the entire pool */
+	for (i = 0; i < size; i++)
+		ba_free(pool, i);
+
+	return 0;
+}
+
+static int
+ba_alloc_helper(struct bitalloc *pool,
+		int              offset,
+		int              words,
+		unsigned int     size,
+		int              index,
+		int             *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc = ba_ffs(storage[index]);
+	int       r;
+
+	if (loc == 0)
+		return -1;
+
+	loc--;
+
+	if (pool->size > size) {
+		r = ba_alloc_helper(pool,
+				    offset + words + 1,
+				    storage[words],
+				    size * 32,
+				    index * 32 + loc,
+				    clear);
+	} else {
+		r = index * 32 + loc;
+		*clear = 1;
+		pool->free_count--;
+	}
+
+	if (*clear) {
+		storage[index] &= ~(1 << loc);
+		*clear = (storage[index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc(struct bitalloc *pool)
+{
+	int clear = 0;
+
+	return ba_alloc_helper(pool, 0, 1, 32, 0, &clear);
+}
+
+static int
+ba_alloc_index_helper(struct bitalloc *pool,
+		      int              offset,
+		      int              words,
+		      unsigned int     size,
+		      int             *index,
+		      int             *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc;
+	int       r;
+
+	if (pool->size > size)
+		r = ba_alloc_index_helper(pool,
+					  offset + words + 1,
+					  storage[words],
+					  size * 32,
+					  index,
+					  clear);
+	else
+		r = 1; /* Check if already allocated */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (r == 1) {
+		r = (storage[*index] & (1 << loc)) ? 0 : -1;
+		if (r == 0) {
+			*clear = 1;
+			pool->free_count--;
+		}
+	}
+
+	if (*clear) {
+		storage[*index] &= ~(1 << loc);
+		*clear = (storage[*index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc_index(struct bitalloc *pool, int index)
+{
+	int clear = 0;
+	int index_copy = index;
+
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	if (ba_alloc_index_helper(pool, 0, 1, 32, &index_copy, &clear) >= 0)
+		return index;
+	else
+		return -1;
+}
+
+static int
+ba_inuse_helper(struct bitalloc *pool,
+		int              offset,
+		int              words,
+		unsigned int     size,
+		int             *index)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc;
+	int       r;
+
+	if (pool->size > size)
+		r = ba_inuse_helper(pool,
+				    offset + words + 1,
+				    storage[words],
+				    size * 32,
+				    index);
+	else
+		r = 1; /* Check if in use */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (r == 1)
+		r = (storage[*index] & (1 << loc)) ? -1 : 0;
+
+	return r;
+}
+
+int
+ba_inuse(struct bitalloc *pool, int index)
+{
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	return ba_inuse_helper(pool, 0, 1, 32, &index) == 0;
+}
+
+static int
+ba_free_helper(struct bitalloc *pool,
+	       int              offset,
+	       int              words,
+	       unsigned int     size,
+	       int             *index)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc;
+	int       r;
+
+	if (pool->size > size)
+		r = ba_free_helper(pool,
+				   offset + words + 1,
+				   storage[words],
+				   size * 32,
+				   index);
+	else
+		r = 1; /* Check if already free */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (r == 1) {
+		r = (storage[*index] & (1 << loc)) ? -1 : 0;
+		if (r == 0)
+			pool->free_count++;
+	}
+
+	if (r == 0)
+		storage[*index] |= (1 << loc);
+
+	return r;
+}
+
+int
+ba_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	return ba_free_helper(pool, 0, 1, 32, &index);
+}
+
+int
+ba_inuse_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	return ba_free_helper(pool, 0, 1, 32, &index) + 1;
+}
+
+int
+ba_free_count(struct bitalloc *pool)
+{
+	return (int)pool->free_count;
+}
+
+int ba_inuse_count(struct bitalloc *pool)
+{
+	return (int)(pool->size) - (int)(pool->free_count);
+}
+
+static int
+ba_find_next_helper(struct bitalloc *pool,
+		    int              offset,
+		    int              words,
+		    unsigned int     size,
+		    int             *index,
+		    int              free)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc, r, bottom = 0;
+
+	if (pool->size > size)
+		r = ba_find_next_helper(pool,
+					offset + words + 1,
+					storage[words],
+					size * 32,
+					index,
+					free);
+	else
+		bottom = 1; /* Bottom of tree */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (bottom) {
+		int bit_index = *index * 32;
+
+		loc = ba_ffs(~storage[*index] & ((bitalloc_word_t)-1 << loc));
+		if (loc > 0) {
+			loc--;
+			r = (bit_index + loc);
+			if (r >= (int)pool->size)
+				r = -1;
+		} else {
+			/* Loop over array at bottom of tree */
+			r = -1;
+			bit_index += 32;
+			*index = *index + 1;
+			while ((int)pool->size > bit_index) {
+				loc = ba_ffs(~storage[*index]);
+
+				if (loc > 0) {
+					loc--;
+					r = (bit_index + loc);
+					if (r >= (int)pool->size)
+						r = -1;
+					break;
+				}
+				bit_index += 32;
+				*index = *index + 1;
+			}
+		}
+	}
+
+	if (r >= 0 && (free)) {
+		if (bottom)
+			pool->free_count++;
+		storage[*index] |= (1 << loc);
+	}
+
+	return r;
+}
+
+int
+ba_find_next_inuse(struct bitalloc *pool, int index)
+{
+	if (index < 0 ||
+	    index >= (int)pool->size ||
+	    pool->free_count == pool->size)
+		return -1;
+
+	return ba_find_next_helper(pool, 0, 1, 32, &index, 0);
+}
+
+int
+ba_find_next_inuse_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 ||
+	    index >= (int)pool->size ||
+	    pool->free_count == pool->size)
+		return -1;
+
+	return ba_find_next_helper(pool, 0, 1, 32, &index, 1);
+}
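
As a quick sanity check on the branchless ba_ffs above: it mirrors
__builtin_ffs, returning the 1-based position of the lowest set bit, or 0 when
no bit is set. A few illustrative assertions (ba_ffs is file-local, so this is
only a sketch; assert.h assumed):

	assert(ba_ffs(0x0) == 0);	/* no bits set */
	assert(ba_ffs(0x1) == 1);	/* bit 0 -> position 1 */
	assert(ba_ffs(0xC) == 3);	/* lowest set bit is bit 2 */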
diff --git a/drivers/net/bnxt/tf_core/bitalloc.h b/drivers/net/bnxt/tf_core/bitalloc.h
new file mode 100644
index 0000000..563c853
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/bitalloc.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BITALLOC_H_
+#define _BITALLOC_H_
+
+#include <stdint.h>
+
+/* Bitalloc works on uint32_t as its word size */
+typedef uint32_t bitalloc_word_t;
+
+struct bitalloc {
+	bitalloc_word_t size;
+	bitalloc_word_t free_count;
+	bitalloc_word_t storage[1];
+};
+
+#define BA_L0(s) (((s) + 31) / 32)
+#define BA_L1(s) ((BA_L0(s) + 31) / 32)
+#define BA_L2(s) ((BA_L1(s) + 31) / 32)
+#define BA_L3(s) ((BA_L2(s) + 31) / 32)
+#define BA_L4(s) ((BA_L3(s) + 31) / 32)
+
+#define BITALLOC_SIZEOF(size)                                    \
+	(sizeof(struct bitalloc) *				 \
+	 (((sizeof(struct bitalloc) +				 \
+	    sizeof(struct bitalloc) - 1 +			 \
+	    (sizeof(bitalloc_word_t) *				 \
+	     ((BA_L0(size) - 1) +				 \
+	      ((BA_L0(size) == 1) ? 0 : (BA_L1(size) + 1)) +	 \
+	      ((BA_L1(size) == 1) ? 0 : (BA_L2(size) + 1)) +	 \
+	      ((BA_L2(size) == 1) ? 0 : (BA_L3(size) + 1)) +	 \
+	      ((BA_L3(size) == 1) ? 0 : (BA_L4(size) + 1)))))) / \
+	  sizeof(struct bitalloc)))
+
+#define BITALLOC_MAX_SIZE (32 * 32 * 32 * 32 * 32 * 32)
+
+/* The instantiation of a bitalloc looks a bit odd. Since a
+ * bit allocator has variable storage, we need a way to get a
+ * pointer to a bitalloc structure that points to the correct
+ * amount of storage. We do this by creating an array of
+ * bitalloc where the first element in the array is the
+ * actual bitalloc base structure, and the remaining elements
+ * in the array provide the storage for it. This approach allows
+ * instances to be individual variables or members of larger
+ * structures.
+ */
+#define BITALLOC_INST(name, size)                      \
+	struct bitalloc name[(BITALLOC_SIZEOF(size) /  \
+			      sizeof(struct bitalloc))]
+
+/* Symbolic return codes */
+#define BA_SUCCESS           0
+#define BA_FAIL             -1
+#define BA_ENTRY_FREE        0
+#define BA_ENTRY_IN_USE      1
+#define BA_NO_ENTRY_FOUND   -1
+
+/**
+ * Initializes the bit allocator
+ *
+ * Returns 0 on success, -1 on failure. Size is arbitrary, up to
+ * BITALLOC_MAX_SIZE.
+ */
+int ba_init(struct bitalloc *pool, int size);
+
+/**
+ * Returns -1 on failure, or index of allocated entry
+ */
+int ba_alloc(struct bitalloc *pool);
+int ba_alloc_index(struct bitalloc *pool, int index);
+
+/**
+ * Query a particular index in a pool to check if its in use.
+ *
+ * Returns -1 on invalid index, 1 if the index is allocated, 0 if it
+ * is free
+ */
+int ba_inuse(struct bitalloc *pool, int index);
+
+/**
+ * Variant of ba_inuse that frees the index if it is allocated, same
+ * return codes as ba_inuse
+ */
+int ba_inuse_free(struct bitalloc *pool, int index);
+
+/**
+ * Find next index that is in use, start checking at index 'idx'
+ *
+ * Returns next index that is in use on success, or
+ * -1 if no in use index is found
+ */
+int ba_find_next_inuse(struct bitalloc *pool, int idx);
+
+/**
+ * Variant of ba_find_next_inuse that also frees the next in use index,
+ * same return codes as ba_find_next_inuse
+ */
+int ba_find_next_inuse_free(struct bitalloc *pool, int idx);
+
+/**
+ * Frees the given index. Freeing an index that is already free has
+ * no negative side effects, but returns -1. Returns -1 on failure,
+ * 0 on success.
+ */
+int ba_free(struct bitalloc *pool, int index);
+
+/**
+ * Returns the pool's free count
+ */
+int ba_free_count(struct bitalloc *pool);
+
+/**
+ * Returns the pool's in use count
+ */
+int ba_inuse_count(struct bitalloc *pool);
+
+#endif /* _BITALLOC_H_ */
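
To make the instantiation idiom concrete, a minimal allocator round trip (pool
size chosen arbitrarily):

	BITALLOC_INST(my_pool, 64);	/* pool plus storage for 64 entries */
	int id;

	if (ba_init(my_pool, 64) == BA_FAIL)
		return -1;

	id = ba_alloc(my_pool);		/* lowest free index; 0 right after init */
	if (id == BA_NO_ENTRY_FOUND)
		return -1;

	/* ... use index 'id' ... */

	if (ba_free(my_pool, id) == BA_FAIL)
		return -1;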
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6bafae5..3c5d55d 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -7,10 +7,18 @@
 
 #include "tf_core.h"
 #include "tf_session.h"
+#include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
+#include "bitalloc.h"
 #include "bnxt.h"
 
+static inline uint32_t SWAP_WORDS32(uint32_t val32)
+{
+	return (((val32 & 0x0000ffff) << 16) |
+		((val32 & 0xffff0000) >> 16));
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -141,5 +149,83 @@ tf_open_session(struct tf                    *tfp,
 	return rc;
 
  cleanup_close:
+	tf_close_session(tfp);
 	return -EINVAL;
 }
+
+int
+tf_attach_session(struct tf *tfp __rte_unused,
+		  struct tf_attach_session_parms *parms __rte_unused)
+{
+#if (TF_SHARED == 1)
+	int rc;
+
+	if (tfp == NULL)
+		return -EINVAL;
+
+	/* - Open the shared memory for the attach_chan_name
+	 * - Point to the shared session for this Device instance
+	 * - Check that session is valid
+	 * - Attach to the firmware so it can record there is more
+	 *   than one client of the session.
+	 */
+
+	if (tfp->session) {
+		if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
+			rc = tf_msg_session_attach(tfp,
+						   parms->ctrl_chan_name,
+						   parms->session_id);
+		}
+	}
+#endif /* TF_SHARED */
+	return -1;
+}
+
+int
+tf_close_session(struct tf *tfp)
+{
+	int rc;
+	int rc_close = 0;
+	struct tf_session *tfs;
+	union tf_session_id session_id;
+
+	if (tfp == NULL || tfp->session == NULL)
+		return -EINVAL;
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
+		rc = tf_msg_session_close(tfp);
+		if (rc) {
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "Message send failed, rc:%d\n",
+				    rc);
+		}
+
+		/* Update the ref_count */
+		tfs->ref_count--;
+	}
+
+	session_id = tfs->session_id;
+
+	/* Final cleanup as we're last user of the session */
+	if (tfs->ref_count == 0) {
+		tfp_free(tfp->session->core_data);
+		tfp_free(tfp->session);
+		tfp->session = NULL;
+	}
+
+	PMD_DRV_LOG(INFO,
+		    "Session closed, session_id:%d\n",
+		    session_id.id);
+
+	PMD_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    session_id.internal.domain,
+		    session_id.internal.bus,
+		    session_id.internal.device,
+		    session_id.internal.fw_session_id);
+
+	return rc_close;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 2b68681..e05aea7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,6 +18,82 @@
 #include "hwrm_tf.h"
 
 /**
+ * Endian converts min and max values from the HW response to the query
+ */
+#define TF_HW_RESP_TO_QUERY(query, index, response, element) do {            \
+	(query)->hw_query[index].min =                                       \
+		tfp_le_to_cpu_16(response. element ## _min);                 \
+	(query)->hw_query[index].max =                                       \
+		tfp_le_to_cpu_16(response. element ## _max);                 \
+} while (0)
+
+/**
+ * Endian converts the number of entries from the alloc to the request
+ */
+#define TF_HW_ALLOC_TO_REQ(alloc, index, request, element)                   \
+	(request. num_ ## element = tfp_cpu_to_le_16((alloc)->hw_num[index]))
+
+/**
+ * Endian converts the start and stride value from the free to the request
+ */
+#define TF_HW_FREE_TO_REQ(hw_entry, index, request, element) do {            \
+	request.element ## _start =                                          \
+		tfp_cpu_to_le_16(hw_entry[index].start);                     \
+	request.element ## _stride =                                         \
+		tfp_cpu_to_le_16(hw_entry[index].stride);                    \
+} while (0)
+
+/**
+ * Endian converts the start and stride from the HW response to the
+ * alloc
+ */
+#define TF_HW_RESP_TO_ALLOC(hw_entry, index, response, element) do {         \
+	hw_entry[index].start =                                              \
+		tfp_le_to_cpu_16(response.element ## _start);                \
+	hw_entry[index].stride =                                             \
+		tfp_le_to_cpu_16(response.element ## _stride);               \
+} while (0)
+
+/**
+ * Endian converts min and max values from the SRAM response to the
+ * query
+ */
+#define TF_SRAM_RESP_TO_QUERY(query, index, response, element) do {          \
+	(query)->sram_query[index].min =                                     \
+		tfp_le_to_cpu_16(response.element ## _min);                  \
+	(query)->sram_query[index].max =                                     \
+		tfp_le_to_cpu_16(response.element ## _max);                  \
+} while (0)
+
+/**
+ * Endian converts the number of entries from the action (alloc) to
+ * the request
+ */
+#define TF_SRAM_ALLOC_TO_REQ(action, index, request, element)                \
+	(request. num_ ## element = tfp_cpu_to_le_16((action)->sram_num[index]))
+
+/**
+ * Endian converts the start and stride value from the free to the request
+ */
+#define TF_SRAM_FREE_TO_REQ(sram_entry, index, request, element) do {        \
+	request.element ## _start =                                          \
+		tfp_cpu_to_le_16(sram_entry[index].start);                   \
+	request.element ## _stride =                                         \
+		tfp_cpu_to_le_16(sram_entry[index].stride);                  \
+} while (0)
+
+/**
+ * Endian converts the start and stride from the HW response to the
+ * alloc
+ */
+#define TF_SRAM_RESP_TO_ALLOC(sram_entry, index, response, element) do {     \
+	sram_entry[index].start =                                            \
+		tfp_le_to_cpu_16(response.element ## _start);                \
+	sram_entry[index].stride =                                           \
+		tfp_le_to_cpu_16(response.element ## _stride);               \
+} while (0)
+
+/**
  * Sends session open request to TF Firmware
  */
 int
@@ -51,6 +127,45 @@ tf_msg_session_open(struct tf *tfp,
 }
 
 /**
+ * Sends session attach request to TF Firmware
+ */
+int
+tf_msg_session_attach(struct tf *tfp __rte_unused,
+		      char *ctrl_chan_name __rte_unused,
+		      uint8_t tf_fw_session_id __rte_unused)
+{
+	return -1;
+}
+
+/**
+ * Sends session close request to TF Firmware
+ */
+int
+tf_msg_session_close(struct tf *tfp)
+{
+	int rc;
+	struct hwrm_tf_session_close_input req = { 0 };
+	struct hwrm_tf_session_close_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	parms.tf_type = HWRM_TF_SESSION_CLOSE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
  * Sends session query config request to TF Firmware
  */
 int
@@ -77,3 +192,289 @@ tf_msg_session_qcfg(struct tf *tfp)
 				 &parms);
 	return rc;
 }
+
+/**
+ * Sends session HW resource query capability request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_qcaps(struct tf *tfp,
+			     enum tf_dir dir,
+			     struct tf_rm_hw_query *query)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_qcaps_input req = { 0 };
+	struct tf_session_hw_resc_qcaps_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(query, 0, sizeof(*query));
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_HW_RESC_QCAPS,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
+			    l2_ctx_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_FUNC, resp,
+			    prof_func);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_TCAM, resp,
+			    prof_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
+			    em_prof_id);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_REC, resp,
+			    em_record_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
+			    wc_tcam_prof_id);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM, resp,
+			    wc_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_PROF, resp,
+			    meter_profiles);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_INST,
+			    resp, meter_inst);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_MIRROR, resp,
+			    mirrors);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_UPAR, resp,
+			    upar);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_SP_TCAM, resp,
+			    sp_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_FUNC, resp,
+			    l2_func);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_FKB, resp,
+			    flex_key_templ);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
+			    tbl_scope);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH0, resp,
+			    epoch0_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH1, resp,
+			    epoch1_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METADATA, resp,
+			    metadata);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_CT_STATE, resp,
+			    ct_state);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_PROF, resp,
+			    range_prof);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
+			    range_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
+			    lag_tbl_entries);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_alloc(struct tf *tfp __rte_unused,
+			     enum tf_dir dir,
+			     struct tf_rm_hw_alloc *hw_alloc __rte_unused,
+			     struct tf_rm_entry *hw_entry __rte_unused)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_alloc_input req = { 0 };
+	struct tf_session_hw_resc_alloc_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(hw_entry, 0, sizeof(*hw_entry));
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
+			   l2_ctx_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_FUNC, req,
+			   prof_func_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_TCAM, req,
+			   prof_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_PROF_ID, req,
+			   em_prof_id);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_REC, req,
+			   em_record_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
+			   wc_tcam_prof_id);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM, req,
+			   wc_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_PROF, req,
+			   meter_profiles);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_INST, req,
+			   meter_inst);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_MIRROR, req,
+			   mirrors);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_UPAR, req,
+			   upar);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_SP_TCAM, req,
+			   sp_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_FUNC, req,
+			   l2_func);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_FKB, req,
+			   flex_key_templ);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_TBL_SCOPE, req,
+			   tbl_scope);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH0, req,
+			   epoch0_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH1, req,
+			   epoch1_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METADATA, req,
+			   metadata);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_CT_STATE, req,
+			   ct_state);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_PROF, req,
+			   range_prof);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
+			   range_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_LAG_ENTRY, req,
+			   lag_tbl_entries);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_HW_RESC_ALLOC,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
+			    l2_ctx_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, resp,
+			    prof_func);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, resp,
+			    prof_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
+			    em_prof_id);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_REC, resp,
+			    em_record_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
+			    wc_tcam_prof_id);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, resp,
+			    wc_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_PROF, resp,
+			    meter_profiles);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_INST, resp,
+			    meter_inst);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_MIRROR, resp,
+			    mirrors);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_UPAR, resp,
+			    upar);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, resp,
+			    sp_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, resp,
+			    l2_func);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_FKB, resp,
+			    flex_key_templ);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
+			    tbl_scope);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH0, resp,
+			    epoch0_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH1, resp,
+			    epoch1_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METADATA, resp,
+			    metadata);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_CT_STATE, resp,
+			    ct_state);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, resp,
+			    range_prof);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
+			    range_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
+			    lag_tbl_entries);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session HW resource free request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_free(struct tf *tfp,
+			    enum tf_dir dir,
+			    struct tf_rm_entry *hw_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(hw_entry, 0, sizeof(*hw_entry));
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
+			  l2_ctx_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
+			  prof_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
+			  prof_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
+			  em_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
+			  em_record_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
+			  wc_tcam_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
+			  wc_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
+			  meter_profiles);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
+			  meter_inst);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
+			  mirrors);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
+			  upar);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
+			  sp_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
+			  l2_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
+			  flex_key_templ);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
+			  tbl_scope);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
+			  epoch0_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
+			  epoch1_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
+			  metadata);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
+			  ct_state);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
+			  range_prof);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
+			  range_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
+			  lag_tbl_entries);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SESSION_HW_RESC_FREE,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
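
To clarify the token-pasting convention driving these blocks:
TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_MIRROR, resp, mirrors) expands,
roughly, to:

	query->hw_query[TF_RESC_TYPE_HW_MIRROR].min =
		tfp_le_to_cpu_16(resp.mirrors_min);
	query->hw_query[TF_RESC_TYPE_HW_MIRROR].max =
		tfp_le_to_cpu_16(resp.mirrors_max);

so each invocation names the firmware response fields (here mirrors_min and
mirrors_max) through the element argument.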
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 20ebf2e..da5ccf3 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -30,6 +30,34 @@ int tf_msg_session_open(struct tf *tfp,
 			uint8_t *fw_session_id);
 
 /**
+ * Sends session attach request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] ctrl_channel_name
+ *   String containing the name of the control channel to attach on
+ *
+ * [in] tf_fw_session_id
+ *   fw_session_id that was assigned to the session at the time of
+ *   session open
+ *
+ * Returns:
+ *   0 on success, negative on failure
+ */
+int tf_msg_session_attach(struct tf *tfp,
+			  char *ctrl_channel_name,
+			  uint8_t tf_fw_session_id);
+
+/**
+ * Sends session close request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * Returns:
+ *   0 on success, negative on failure
+ */
+int tf_msg_session_close(struct tf *tfp);
+
+/**
  * Sends session query config request to TF Firmware
  */
 int tf_msg_session_qcfg(struct tf *tfp);
@@ -41,4 +69,18 @@ int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
 				 enum tf_dir dir,
 				 struct tf_rm_hw_query *hw_query);
 
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ */
+int tf_msg_session_hw_resc_alloc(struct tf *tfp,
+				 enum tf_dir dir,
+				 struct tf_rm_hw_alloc *hw_alloc,
+				 struct tf_rm_entry *hw_entry);
+
+/**
+ * Sends session HW resource free request to TF Firmware
+ */
+int tf_msg_session_hw_resc_free(struct tf *tfp,
+				enum tf_dir dir,
+				struct tf_rm_entry *hw_entry);
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 160abac..8dbb2f9 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,11 +6,6 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
-/*
- * Hardware specific MAX values
- * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
- */
-
 /** HW Resource types
  */
 enum tf_resource_type_hw {
@@ -43,4 +38,23 @@ enum tf_resource_type_hw {
 	TF_RESC_TYPE_HW_LAG_ENTRY,
 	TF_RESC_TYPE_HW_MAX
 };
+
+/** SRAM Resource types
+ */
+enum tf_resource_type_sram {
+	TF_RESC_TYPE_SRAM_FULL_ACTION,
+	TF_RESC_TYPE_SRAM_MCG,
+	TF_RESC_TYPE_SRAM_ENCAP_8B,
+	TF_RESC_TYPE_SRAM_ENCAP_16B,
+	TF_RESC_TYPE_SRAM_ENCAP_64B,
+	TF_RESC_TYPE_SRAM_SP_SMAC,
+	TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+	TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+	TF_RESC_TYPE_SRAM_COUNTER_64B,
+	TF_RESC_TYPE_SRAM_NAT_SPORT,
+	TF_RESC_TYPE_SRAM_NAT_DPORT,
+	TF_RESC_TYPE_SRAM_NAT_S_IPV4,
+	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
+	TF_RESC_TYPE_SRAM_MAX
+};
 #endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 5164d6b..57ce19b 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -8,10 +8,52 @@
 
 #include "tf_resources.h"
 #include "tf_core.h"
+#include "bitalloc.h"
 
 struct tf;
 struct tf_session;
 
+/* Internal macro to determine the appropriate allocation pool based
+ * on the DIRECTION parm; it also performs error checking of the
+ * DIRECTION parm. The SESSION_POOL pointer is set appropriately
+ * upon successful return (the SESSION_POOL is used to track the
+ * resources that have been allocated to the session).
+ *
+ * parameters:
+ *   struct tfp        *tfp
+ *   enum tf_dir        direction
+ *   struct bitalloc  **session_pool
+ *   string             base_pool_name - used to form pointers to the
+ *					 appropriate bit allocation
+ *					 pools; both directions of the
+ *					 session pools must have the
+ *					 same base name. For example,
+ *					 if POOL_NAME is feat_pool,
+ *					 the ptrs to the session pools
+ *					 are feat_pool_RX and
+ *					 feat_pool_TX.
+ *
+ *  int                  rc - return code
+ *			      0 - Success
+ *			     -1 - invalid DIRECTION parm
+ */
+#define TF_RM_GET_POOLS(tfs, direction, session_pool, pool_name, rc) do { \
+		(rc) = 0;						\
+		if ((direction) == TF_DIR_RX) {				\
+			*(session_pool) = (tfs)->pool_name ## _RX;	\
+		} else if ((direction) == TF_DIR_TX) {			\
+			*(session_pool) = (tfs)->pool_name ## _TX;	\
+		} else {						\
+			rc = -1;					\
+		}							\
+	} while (0)
+
+#define TF_RM_GET_POOLS_RX(tfs, session_pool, pool_name)	\
+	(*(session_pool) = (tfs)->pool_name ## _RX)
+
+#define TF_RM_GET_POOLS_TX(tfs, session_pool, pool_name)	\
+	(*(session_pool) = (tfs)->pool_name ## _TX)
+
 /**
  * Resource query single entry
  */
@@ -23,6 +65,16 @@ struct tf_rm_query_entry {
 };
 
 /**
+ * Resource single entry
+ */
+struct tf_rm_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
+
+/**
  * Resource query array of HW entities
  */
 struct tf_rm_hw_query {
@@ -30,4 +82,65 @@ struct tf_rm_hw_query {
 	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
 };
 
+/**
+ * Resource allocation array of HW entities
+ */
+struct tf_rm_hw_alloc {
+	/** array of HW resource entries */
+	uint16_t hw_num[TF_RESC_TYPE_HW_MAX];
+};
+
+/**
+ * Resource query array of SRAM entities
+ */
+struct tf_rm_sram_query {
+	/** array of SRAM resource entries */
+	struct tf_rm_query_entry sram_query[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Resource allocation array of SRAM entities
+ */
+struct tf_rm_sram_alloc {
+	/** array of SRAM resource entries */
+	uint16_t sram_num[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Initializes the Resource Manager and the associated database
+ * entries for HW and SRAM resources. Must be called before any other
+ * Resource Manager functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ */
+void tf_rm_init(struct tf *tfp);
+
+/**
+ * Allocates and validates both HW and SRAM resources per the NVM
+ * configuration. If any allocation fails, all resources for the
+ * session are deallocated.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_allocate_validate(struct tf *tfp);
+
+/**
+ * Closes the Resource Manager and frees all allocated resources per
+ * the associated database.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOTEMPTY) if resources are not cleaned up before close
+ */
+int tf_rm_close(struct tf *tfp);
 #endif /* TF_RM_H_ */
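
A condensed sketch of how the pool-selection macro is intended to be used;
'feat_pool' is the example base name from the comment above, so the session
struct would need hypothetical feat_pool_RX/feat_pool_TX bitalloc members, and
tfs is assumed to be a valid struct tf_session pointer:

	struct bitalloc *session_pool;
	int rc;
	int id;

	TF_RM_GET_POOLS(tfs, TF_DIR_RX, &session_pool, feat_pool, rc);
	if (rc)
		return rc;

	id = ba_alloc(session_pool);	/* allocate from the RX pool */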
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index c30ebbe..f845984 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -9,6 +9,7 @@
 #include <stdint.h>
 #include <stdlib.h>
 
+#include "bitalloc.h"
 #include "tf_core.h"
 #include "tf_rm.h"
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 06/34] net/bnxt: add tf core session sram functions
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (4 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 05/34] net/bnxt: add initial tf core session close support Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 07/34] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
                     ` (29 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TruFlow session resource support functionality
- Add TruFlow session HW flush capability as well as
  SRAM support functions.
- Add resource definitions for session pools.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile               |   1 +
 drivers/net/bnxt/tf_core/rand.c         |  47 ++++
 drivers/net/bnxt/tf_core/rand.h         |  36 +++
 drivers/net/bnxt/tf_core/tf_core.c      |   1 +
 drivers/net/bnxt/tf_core/tf_msg.c       | 344 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h       |  37 +++
 drivers/net/bnxt/tf_core/tf_resources.h | 482 ++++++++++++++++++++++++++++++++
 7 files changed, 948 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_core/rand.c
 create mode 100644 drivers/net/bnxt/tf_core/rand.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 8474673..c39c098 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -50,6 +50,7 @@ endif
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
 #
diff --git a/drivers/net/bnxt/tf_core/rand.c b/drivers/net/bnxt/tf_core/rand.c
new file mode 100644
index 0000000..32028df
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/rand.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Random Number Functions */
+
+#include <stdio.h>
+#include <stdint.h>
+#include "rand.h"
+
+#define TF_RAND_LFSR_INIT_VALUE 0xACE1u
+
+uint16_t lfsr = TF_RAND_LFSR_INIT_VALUE;
+uint32_t bit;
+
+/**
+ * Generates a 16 bit pseudo random number
+ *
+ * Returns:
+ *   uint16_t number
+ */
+uint16_t rand16(void)
+{
+	bit = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1;
+	return lfsr = (lfsr >> 1) | (bit << 15);
+}
+
+/**
+ * Generates a 32 bit pseudo random number
+ *
+ * Returns:
+ *   uint32_t number
+ */
+uint32_t rand32(void)
+{
+	return (rand16() << 16) | rand16();
+}
+
+/**
+ * Resets the seed used by the pseudo random number generator
+ */
+void rand_init(void)
+{
+	lfsr = TF_RAND_LFSR_INIT_VALUE;
+	bit = 0;
+}
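
For reference, the taps above (bits 0, 2, 3 and 5 of the shifted register)
implement the classic 16-bit Fibonacci LFSR with polynomial
x^16 + x^14 + x^13 + x^11 + 1, which cycles through all 65535 non-zero states;
it is deterministic and repeatable, not cryptographic. A usage sketch:

	rand_init();			/* reseed to 0xACE1; sequence repeats */
	uint16_t id16 = rand16();
	uint32_t id32 = rand32();	/* two rand16() draws packed together */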
diff --git a/drivers/net/bnxt/tf_core/rand.h b/drivers/net/bnxt/tf_core/rand.h
new file mode 100644
index 0000000..31cd76e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/rand.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Random Number Functions */
+#ifndef __RAND_H__
+#define __RAND_H__
+
+/**
+ * Generates a 16 bit pseudo random number
+ *
+ * Returns:
+ * uint16_t number
+ *
+ */
+uint16_t rand16(void);
+
+/**
+ * Generates a 32 bit pseudo random number
+ *
+ * Returns:
+ * uint32_t number
+ *
+ */
+uint32_t rand32(void);
+
+/**
+ * Resets the seed used by the pseudo random number generator
+ *
+ * Returns:
+ *
+ */
+void rand_init(void);
+
+#endif /* __RAND_H__ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 3c5d55d..d82f746 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -12,6 +12,7 @@
 #include "tfp.h"
 #include "bitalloc.h"
 #include "bnxt.h"
+#include "rand.h"
 
 static inline uint32_t SWAP_WORDS32(uint32_t val32)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index e05aea7..4ce2bc5 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -478,3 +478,347 @@ tf_msg_session_hw_resc_free(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+/**
+ * Sends session HW resource flush request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_flush(struct tf *tfp,
+			     enum tf_dir dir,
+			     struct tf_rm_entry *hw_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
+			  l2_ctx_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
+			  prof_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
+			  prof_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
+			  em_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
+			  em_record_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
+			  wc_tcam_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
+			  wc_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
+			  meter_profiles);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
+			  meter_inst);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
+			  mirrors);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
+			  upar);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
+			  sp_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
+			  l2_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
+			  flex_key_templ);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
+			  tbl_scope);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
+			  epoch0_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
+			  epoch1_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
+			  metadata);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
+			  ct_state);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
+			  range_prof);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
+			  range_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
+			  lag_tbl_entries);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 TF_TYPE_TRUFLOW,
+			 HWRM_TFT_SESSION_HW_RESC_FLUSH,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource query capability request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_qcaps(struct tf *tfp __rte_unused,
+			       enum tf_dir dir,
+			       struct tf_rm_sram_query *query __rte_unused)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_qcaps_input req = { 0 };
+	struct tf_session_sram_resc_qcaps_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_SRAM_RESC_QCAPS,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_FULL_ACTION, resp,
+			      full_action);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_MCG, resp,
+			      mcg);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
+			      encap_8b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
+			      encap_16b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
+			      encap_64b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
+			      sp_smac);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, resp,
+			      sp_smac_ipv4);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, resp,
+			      sp_smac_ipv6);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
+			      counter_64b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
+			      nat_sport);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
+			      nat_dport);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
+			      nat_s_ipv4);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
+			      nat_d_ipv4);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
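+/*
+ * Illustrative only (not part of this patch): after a successful
+ * qcaps exchange the caller can compare what firmware offers against
+ * the static reservations, e.g. for RX full action records:
+ *
+ *   struct tf_rm_sram_query query = { 0 };
+ *
+ *   rc = tf_msg_session_sram_resc_qcaps(tfp, TF_DIR_RX, &query);
+ *   if (!rc &&
+ *       query.sram_query[TF_RESC_TYPE_SRAM_FULL_ACTION].max <
+ *       TF_RSVD_SRAM_FULL_ACTION_RX)
+ *           PMD_DRV_LOG(ERR, "Not enough full action records\n");
+ */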
+
+/**
+ * Sends session SRAM resource allocation request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_alloc(struct tf *tfp,
+			       enum tf_dir dir,
+			       struct tf_rm_sram_alloc *sram_alloc,
+			       struct tf_rm_entry *sram_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_alloc_input req = { 0 };
+	struct tf_session_sram_resc_alloc_output resp;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(&resp, 0, sizeof(resp));
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
+			     full_action);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_MCG, req,
+			     mcg);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
+			     encap_8b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
+			     encap_16b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
+			     encap_64b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC, req,
+			     sp_smac);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+			     req, sp_smac_ipv4);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+			     req, sp_smac_ipv6);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_COUNTER_64B,
+			     req, counter_64b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
+			     nat_sport);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
+			     nat_dport);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
+			     nat_s_ipv4);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
+			     nat_d_ipv4);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_SRAM_RESC_ALLOC,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION,
+			      resp, full_action);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_MCG, resp,
+			      mcg);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
+			      encap_8b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
+			      encap_16b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
+			      encap_64b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
+			      sp_smac);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+			      resp, sp_smac_ipv4);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+			      resp, sp_smac_ipv6);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
+			      counter_64b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
+			      nat_sport);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
+			      nat_dport);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
+			      nat_s_ipv4);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
+			      nat_d_ipv4);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource free request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_free(struct tf *tfp,
+			      enum tf_dir dir,
+			      struct tf_rm_entry *sram_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
+			    full_action);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
+			    mcg);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
+			    encap_8b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
+			    encap_16b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
+			    encap_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
+			    sp_smac);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
+			    sp_smac_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
+			    sp_smac_ipv6);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
+			    counter_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
+			    nat_sport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
+			    nat_dport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
+			    nat_s_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
+			    nat_d_ipv4);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SESSION_SRAM_RESC_FREE,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource flush request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_flush(struct tf *tfp,
+			       enum tf_dir dir,
+			       struct tf_rm_entry *sram_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
+			    full_action);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
+			    mcg);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
+			    encap_8b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
+			    encap_16b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
+			    encap_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
+			    sp_smac);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
+			    sp_smac_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
+			    sp_smac_ipv6);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
+			    counter_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
+			    nat_sport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
+			    nat_dport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
+			    nat_s_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
+			    nat_d_ipv4);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 TF_TYPE_TRUFLOW,
+			 HWRM_TFT_SESSION_SRAM_RESC_FLUSH,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index da5ccf3..057de84 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -83,4 +83,41 @@ int tf_msg_session_hw_resc_alloc(struct tf *tfp,
 int tf_msg_session_hw_resc_free(struct tf *tfp,
 				enum tf_dir dir,
 				struct tf_rm_entry *hw_entry);
+
+/**
+ * Sends session HW resource flush request to TF Firmware
+ */
+int tf_msg_session_hw_resc_flush(struct tf *tfp,
+				 enum tf_dir dir,
+				 struct tf_rm_entry *hw_entry);
+
+/**
+ * Sends session SRAM resource capability query request to TF Firmware
+ */
+int tf_msg_session_sram_resc_qcaps(struct tf *tfp,
+				   enum tf_dir dir,
+				   struct tf_rm_sram_query *sram_query);
+
+/**
+ * Sends session SRAM resource allocation request to TF Firmware
+ */
+int tf_msg_session_sram_resc_alloc(struct tf *tfp,
+				   enum tf_dir dir,
+				   struct tf_rm_sram_alloc *sram_alloc,
+				   struct tf_rm_entry *sram_entry);
+
+/**
+ * Sends session SRAM resource free request to TF Firmware
+ */
+int tf_msg_session_sram_resc_free(struct tf *tfp,
+				  enum tf_dir dir,
+				  struct tf_rm_entry *sram_entry);
+
+/**
+ * Sends session SRAM resource flush request to TF Firmware
+ */
+int tf_msg_session_sram_resc_flush(struct tf *tfp,
+				   enum tf_dir dir,
+				   struct tf_rm_entry *sram_entry);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 8dbb2f9..05e131f 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,6 +6,487 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
+/*
+ * Hardware specific MAX values
+ * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
+ */
+
+/* Common HW resources for all chip variants */
+#define TF_NUM_L2_CTXT_TCAM      1024      /* < Number of L2 context TCAM
+					    * entries
+					    */
+#define TF_NUM_PROF_FUNC          128      /* < Number of prof_func IDs */
+#define TF_NUM_PROF_TCAM         1024      /* < Number of entries in profile
+					    * TCAM
+					    */
+#define TF_NUM_EM_PROF_ID          64      /* < Number of software EM
+					    * Profile IDs
+					    */
+#define TF_NUM_WC_PROF_ID         256      /* < Number of WC profile IDs */
+#define TF_NUM_WC_TCAM_ROW        256      /* < Number of slices per row in
+					    * WC TCAM. A slice is a WC TCAM
+					    * entry.
+					    */
+#define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
+#define TF_NUM_METER             1024      /* < Number of meter instances */
+#define TF_NUM_MIRROR               2      /* < Number of mirror instances */
+#define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
+
+/* Wh+/Brd2 specific HW resources */
+#define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
+					    * entries
+					    */
+
+/* Brd2/Brd4 specific HW resources */
+#define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
+
+
+/* Brd3, Brd4 common HW resources */
+#define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
+					    * templates
+					    */
+
+/* Brd4 specific HW resources */
+#define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
+#define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
+#define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
+#define TF_NUM_METADATA             8      /* < Number of MetaData Profiles */
+#define TF_NUM_CT_STATE            32      /* < Number of Connection Tracking
+					    * States
+					    */
+#define TF_NUM_RANGE_PROF          16      /* < Number of Range Profiles */
+#define TF_NUM_RANGE_ENTRY (64 * 1024)     /* < Number of Range Entries */
+#define TF_NUM_LAG_ENTRY          256      /* < Number of LAG Entries */
+
+/*
+ * Common for the Reserved Resource defines below:
+ *
+ * - HW Resources
+ *   For resources where a priority level plays a role, e.g. L2 ctx
+ *   TCAM entries, both a number of resources and a begin/end pair
+ *   are required. The begin/end pair ensures TFLIB gets the correct
+ *   priority setting for that resource.
+ *
+ *   For EM records no priority is required, thus a number of
+ *   resources is sufficient.
+ *
+ *   Example, TCAM:
+ *     64 reserved L2 CTXT TCAM entries in a 1024-entry pool occupy
+ *     entries 0-63, as HW treats entry 0 as the highest priority.
+ *
+ * - SRAM Resources
+ *   Handled as regular resources as there is no priority required.
+ *
+ * Common for these resources is that they are handled per direction,
+ * rx/tx.
+ */
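+/*
+ * Illustrative only: for a bottom-up reserved type the pair is
+ * expected to satisfy END_IDX == BEGIN_IDX + count - 1, e.g. with
+ * TF_RSVD_PROF_FUNC_RX = 64 and BEGIN_IDX 64, the END_IDX is 127.
+ */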
+
+/* HW Resources */
+
+/* L2 CTX */
+#define TF_RSVD_L2_CTXT_TCAM_RX                   64
+#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_RX         0
+#define TF_RSVD_L2_CTXT_TCAM_END_IDX_RX           (TF_RSVD_L2_CTXT_TCAM_RX - 1)
+#define TF_RSVD_L2_CTXT_TCAM_TX                   960
+#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_TX         0
+#define TF_RSVD_L2_CTXT_TCAM_END_IDX_TX           (TF_RSVD_L2_CTXT_TCAM_TX - 1)
+
+/* Profiler */
+#define TF_RSVD_PROF_FUNC_RX                      64
+#define TF_RSVD_PROF_FUNC_BEGIN_IDX_RX            64
+#define TF_RSVD_PROF_FUNC_END_IDX_RX              127
+#define TF_RSVD_PROF_FUNC_TX                      64
+#define TF_RSVD_PROF_FUNC_BEGIN_IDX_TX            64
+#define TF_RSVD_PROF_FUNC_END_IDX_TX              127
+
+#define TF_RSVD_PROF_TCAM_RX                      64
+#define TF_RSVD_PROF_TCAM_BEGIN_IDX_RX            960
+#define TF_RSVD_PROF_TCAM_END_IDX_RX              1023
+#define TF_RSVD_PROF_TCAM_TX                      64
+#define TF_RSVD_PROF_TCAM_BEGIN_IDX_TX            960
+#define TF_RSVD_PROF_TCAM_END_IDX_TX              1023
+
+/* EM Profiles IDs */
+#define TF_RSVD_EM_PROF_ID_RX                     64
+#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_RX           0
+#define TF_RSVD_EM_PROF_ID_END_IDX_RX             63  /* Less on CU+ than SR */
+#define TF_RSVD_EM_PROF_ID_TX                     64
+#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_TX           0
+#define TF_RSVD_EM_PROF_ID_END_IDX_TX             63  /* Less on CU+ than SR */
+
+/* EM Records */
+#define TF_RSVD_EM_REC_RX                         16000
+#define TF_RSVD_EM_REC_BEGIN_IDX_RX               0
+#define TF_RSVD_EM_REC_TX                         16000
+#define TF_RSVD_EM_REC_BEGIN_IDX_TX               0
+
+/* Wildcard */
+#define TF_RSVD_WC_TCAM_PROF_ID_RX                128
+#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_RX      128
+#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_RX        255
+#define TF_RSVD_WC_TCAM_PROF_ID_TX                128
+#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_TX      128
+#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_TX        255
+
+#define TF_RSVD_WC_TCAM_RX                        64
+#define TF_RSVD_WC_TCAM_BEGIN_IDX_RX              0
+#define TF_RSVD_WC_TCAM_END_IDX_RX                63
+#define TF_RSVD_WC_TCAM_TX                        64
+#define TF_RSVD_WC_TCAM_BEGIN_IDX_TX              0
+#define TF_RSVD_WC_TCAM_END_IDX_TX                63
+
+#define TF_RSVD_METER_PROF_RX                     0
+#define TF_RSVD_METER_PROF_BEGIN_IDX_RX           0
+#define TF_RSVD_METER_PROF_END_IDX_RX             0
+#define TF_RSVD_METER_PROF_TX                     0
+#define TF_RSVD_METER_PROF_BEGIN_IDX_TX           0
+#define TF_RSVD_METER_PROF_END_IDX_TX             0
+
+#define TF_RSVD_METER_INST_RX                     0
+#define TF_RSVD_METER_INST_BEGIN_IDX_RX           0
+#define TF_RSVD_METER_INST_END_IDX_RX             0
+#define TF_RSVD_METER_INST_TX                     0
+#define TF_RSVD_METER_INST_BEGIN_IDX_TX           0
+#define TF_RSVD_METER_INST_END_IDX_TX             0
+
+/* Mirror */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_MIRROR_RX                         0
+#define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
+#define TF_RSVD_MIRROR_END_IDX_RX                 0
+#define TF_RSVD_MIRROR_TX                         0
+#define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
+#define TF_RSVD_MIRROR_END_IDX_TX                 0
+
+/* UPAR */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_UPAR_RX                           0
+#define TF_RSVD_UPAR_BEGIN_IDX_RX                 0
+#define TF_RSVD_UPAR_END_IDX_RX                   0
+#define TF_RSVD_UPAR_TX                           0
+#define TF_RSVD_UPAR_BEGIN_IDX_TX                 0
+#define TF_RSVD_UPAR_END_IDX_TX                   0
+
+/* Source Properties */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_SP_TCAM_RX                        0
+#define TF_RSVD_SP_TCAM_BEGIN_IDX_RX              0
+#define TF_RSVD_SP_TCAM_END_IDX_RX                0
+#define TF_RSVD_SP_TCAM_TX                        0
+#define TF_RSVD_SP_TCAM_BEGIN_IDX_TX              0
+#define TF_RSVD_SP_TCAM_END_IDX_TX                0
+
+/* L2 Func */
+#define TF_RSVD_L2_FUNC_RX                        0
+#define TF_RSVD_L2_FUNC_BEGIN_IDX_RX              0
+#define TF_RSVD_L2_FUNC_END_IDX_RX                0
+#define TF_RSVD_L2_FUNC_TX                        0
+#define TF_RSVD_L2_FUNC_BEGIN_IDX_TX              0
+#define TF_RSVD_L2_FUNC_END_IDX_TX                0
+
+/* FKB */
+#define TF_RSVD_FKB_RX                            0
+#define TF_RSVD_FKB_BEGIN_IDX_RX                  0
+#define TF_RSVD_FKB_END_IDX_RX                    0
+#define TF_RSVD_FKB_TX                            0
+#define TF_RSVD_FKB_BEGIN_IDX_TX                  0
+#define TF_RSVD_FKB_END_IDX_TX                    0
+
+/* TBL Scope */
+#define TF_RSVD_TBL_SCOPE_RX                      1
+#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_RX            0
+#define TF_RSVD_TBL_SCOPE_END_IDX_RX              1
+#define TF_RSVD_TBL_SCOPE_TX                      1
+#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_TX            0
+#define TF_RSVD_TBL_SCOPE_END_IDX_TX              1
+
+/* EPOCH0 */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_EPOCH0_RX                         0
+#define TF_RSVD_EPOCH0_BEGIN_IDX_RX               0
+#define TF_RSVD_EPOCH0_END_IDX_RX                 0
+#define TF_RSVD_EPOCH0_TX                         0
+#define TF_RSVD_EPOCH0_BEGIN_IDX_TX               0
+#define TF_RSVD_EPOCH0_END_IDX_TX                 0
+
+/* EPOCH1 */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_EPOCH1_RX                         0
+#define TF_RSVD_EPOCH1_BEGIN_IDX_RX               0
+#define TF_RSVD_EPOCH1_END_IDX_RX                 0
+#define TF_RSVD_EPOCH1_TX                         0
+#define TF_RSVD_EPOCH1_BEGIN_IDX_TX               0
+#define TF_RSVD_EPOCH1_END_IDX_TX                 0
+
+/* METADATA */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_METADATA_RX                       0
+#define TF_RSVD_METADATA_BEGIN_IDX_RX             0
+#define TF_RSVD_METADATA_END_IDX_RX               0
+#define TF_RSVD_METADATA_TX                       0
+#define TF_RSVD_METADATA_BEGIN_IDX_TX             0
+#define TF_RSVD_METADATA_END_IDX_TX               0
+
+/* CT_STATE */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_CT_STATE_RX                       0
+#define TF_RSVD_CT_STATE_BEGIN_IDX_RX             0
+#define TF_RSVD_CT_STATE_END_IDX_RX               0
+#define TF_RSVD_CT_STATE_TX                       0
+#define TF_RSVD_CT_STATE_BEGIN_IDX_TX             0
+#define TF_RSVD_CT_STATE_END_IDX_TX               0
+
+/* RANGE_PROF */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_RANGE_PROF_RX                     0
+#define TF_RSVD_RANGE_PROF_BEGIN_IDX_RX           0
+#define TF_RSVD_RANGE_PROF_END_IDX_RX             0
+#define TF_RSVD_RANGE_PROF_TX                     0
+#define TF_RSVD_RANGE_PROF_BEGIN_IDX_TX           0
+#define TF_RSVD_RANGE_PROF_END_IDX_TX             0
+
+/* RANGE_ENTRY */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_RANGE_ENTRY_RX                    0
+#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_RX          0
+#define TF_RSVD_RANGE_ENTRY_END_IDX_RX            0
+#define TF_RSVD_RANGE_ENTRY_TX                    0
+#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_TX          0
+#define TF_RSVD_RANGE_ENTRY_END_IDX_TX            0
+
+/* LAG_ENTRY */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_LAG_ENTRY_RX                      0
+#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_RX            0
+#define TF_RSVD_LAG_ENTRY_END_IDX_RX              0
+#define TF_RSVD_LAG_ENTRY_TX                      0
+#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_TX            0
+#define TF_RSVD_LAG_ENTRY_END_IDX_TX              0
+
+
+/* SRAM - Resources
+ * Limited to the types that CFA provides.
+ */
+#define TF_RSVD_SRAM_FULL_ACTION_RX               8001
+#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX     0
+#define TF_RSVD_SRAM_FULL_ACTION_TX               8001
+#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX     0
+
+/* Not yet supported fully in the infra */
+#define TF_RSVD_SRAM_MCG_RX                       0
+#define TF_RSVD_SRAM_MCG_BEGIN_IDX_RX             0
+/* Multicast Group on TX is not supported */
+#define TF_RSVD_SRAM_MCG_TX                       0
+#define TF_RSVD_SRAM_MCG_BEGIN_IDX_TX             0
+
+/* First encap of 8B RX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_8B_RX                  32
+#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX        0
+/* First encap of 8B TX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_8B_TX                  0
+#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX        0
+
+#define TF_RSVD_SRAM_ENCAP_16B_RX                 16
+#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX       0
+/* First encap of 16B TX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_16B_TX                 20
+#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX       0
+
+/* Encap of 64B on RX is not supported */
+#define TF_RSVD_SRAM_ENCAP_64B_RX                 0
+#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_RX       0
+/* First encap of 64B TX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_64B_TX                 1007
+#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX       0
+
+#define TF_RSVD_SRAM_SP_SMAC_RX                   0
+#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX         0
+#define TF_RSVD_SRAM_SP_SMAC_TX                   0
+#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX         0
+
+/* SRAM SP IPV4 on RX is not supported */
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_RX              0
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_RX    0
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_TX              511
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX    0
+
+/* SRAM SP IPV6 on RX is not supported */
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_RX              0
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_RX    0
+/* Not yet supported fully in infra */
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_TX              0
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX    0
+
+#define TF_RSVD_SRAM_COUNTER_64B_RX               160
+#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX     0
+#define TF_RSVD_SRAM_COUNTER_64B_TX               160
+#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX     0
+
+#define TF_RSVD_SRAM_NAT_SPORT_RX                 0
+#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX       0
+#define TF_RSVD_SRAM_NAT_SPORT_TX                 0
+#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX       0
+
+#define TF_RSVD_SRAM_NAT_DPORT_RX                 0
+#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX       0
+#define TF_RSVD_SRAM_NAT_DPORT_TX                 0
+#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX       0
+
+#define TF_RSVD_SRAM_NAT_S_IPV4_RX                0
+#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX      0
+#define TF_RSVD_SRAM_NAT_S_IPV4_TX                0
+#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX      0
+
+#define TF_RSVD_SRAM_NAT_D_IPV4_RX                0
+#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX      0
+#define TF_RSVD_SRAM_NAT_D_IPV4_TX                0
+#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX      0
+
+/* HW Resource Pool names */
+
+#define TF_L2_CTXT_TCAM_POOL_NAME         l2_ctxt_tcam_pool
+#define TF_L2_CTXT_TCAM_POOL_NAME_RX      l2_ctxt_tcam_pool_rx
+#define TF_L2_CTXT_TCAM_POOL_NAME_TX      l2_ctxt_tcam_pool_tx
+
+#define TF_PROF_FUNC_POOL_NAME            prof_func_pool
+#define TF_PROF_FUNC_POOL_NAME_RX         prof_func_pool_rx
+#define TF_PROF_FUNC_POOL_NAME_TX         prof_func_pool_tx
+
+#define TF_PROF_TCAM_POOL_NAME            prof_tcam_pool
+#define TF_PROF_TCAM_POOL_NAME_RX         prof_tcam_pool_rx
+#define TF_PROF_TCAM_POOL_NAME_TX         prof_tcam_pool_tx
+
+#define TF_EM_PROF_ID_POOL_NAME           em_prof_id_pool
+#define TF_EM_PROF_ID_POOL_NAME_RX        em_prof_id_pool_rx
+#define TF_EM_PROF_ID_POOL_NAME_TX        em_prof_id_pool_tx
+
+#define TF_WC_TCAM_PROF_ID_POOL_NAME      wc_tcam_prof_id_pool
+#define TF_WC_TCAM_PROF_ID_POOL_NAME_RX   wc_tcam_prof_id_pool_rx
+#define TF_WC_TCAM_PROF_ID_POOL_NAME_TX   wc_tcam_prof_id_pool_tx
+
+#define TF_WC_TCAM_POOL_NAME              wc_tcam_pool
+#define TF_WC_TCAM_POOL_NAME_RX           wc_tcam_pool_rx
+#define TF_WC_TCAM_POOL_NAME_TX           wc_tcam_pool_tx
+
+#define TF_METER_PROF_POOL_NAME           meter_prof_pool
+#define TF_METER_PROF_POOL_NAME_RX        meter_prof_pool_rx
+#define TF_METER_PROF_POOL_NAME_TX        meter_prof_pool_tx
+
+#define TF_METER_INST_POOL_NAME           meter_inst_pool
+#define TF_METER_INST_POOL_NAME_RX        meter_inst_pool_rx
+#define TF_METER_INST_POOL_NAME_TX        meter_inst_pool_tx
+
+#define TF_MIRROR_POOL_NAME               mirror_pool
+#define TF_MIRROR_POOL_NAME_RX            mirror_pool_rx
+#define TF_MIRROR_POOL_NAME_TX            mirror_pool_tx
+
+#define TF_UPAR_POOL_NAME                 upar_pool
+#define TF_UPAR_POOL_NAME_RX              upar_pool_rx
+#define TF_UPAR_POOL_NAME_TX              upar_pool_tx
+
+#define TF_SP_TCAM_POOL_NAME              sp_tcam_pool
+#define TF_SP_TCAM_POOL_NAME_RX           sp_tcam_pool_rx
+#define TF_SP_TCAM_POOL_NAME_TX           sp_tcam_pool_tx
+
+#define TF_FKB_POOL_NAME                  fkb_pool
+#define TF_FKB_POOL_NAME_RX               fkb_pool_rx
+#define TF_FKB_POOL_NAME_TX               fkb_pool_tx
+
+#define TF_TBL_SCOPE_POOL_NAME            tbl_scope_pool
+#define TF_TBL_SCOPE_POOL_NAME_RX         tbl_scope_pool_rx
+#define TF_TBL_SCOPE_POOL_NAME_TX         tbl_scope_pool_tx
+
+#define TF_L2_FUNC_POOL_NAME              l2_func_pool
+#define TF_L2_FUNC_POOL_NAME_RX           l2_func_pool_rx
+#define TF_L2_FUNC_POOL_NAME_TX           l2_func_pool_tx
+
+#define TF_EPOCH0_POOL_NAME               epoch0_pool
+#define TF_EPOCH0_POOL_NAME_RX            epoch0_pool_rx
+#define TF_EPOCH0_POOL_NAME_TX            epoch0_pool_tx
+
+#define TF_EPOCH1_POOL_NAME               epoch1_pool
+#define TF_EPOCH1_POOL_NAME_RX            epoch1_pool_rx
+#define TF_EPOCH1_POOL_NAME_TX            epoch1_pool_tx
+
+#define TF_METADATA_POOL_NAME             metadata_pool
+#define TF_METADATA_POOL_NAME_RX          metadata_pool_rx
+#define TF_METADATA_POOL_NAME_TX          metadata_pool_tx
+
+#define TF_CT_STATE_POOL_NAME             ct_state_pool
+#define TF_CT_STATE_POOL_NAME_RX          ct_state_pool_rx
+#define TF_CT_STATE_POOL_NAME_TX          ct_state_pool_tx
+
+#define TF_RANGE_PROF_POOL_NAME           range_prof_pool
+#define TF_RANGE_PROF_POOL_NAME_RX        range_prof_pool_rx
+#define TF_RANGE_PROF_POOL_NAME_TX        range_prof_pool_tx
+
+#define TF_RANGE_ENTRY_POOL_NAME          range_entry_pool
+#define TF_RANGE_ENTRY_POOL_NAME_RX       range_entry_pool_rx
+#define TF_RANGE_ENTRY_POOL_NAME_TX       range_entry_pool_tx
+
+#define TF_LAG_ENTRY_POOL_NAME            lag_entry_pool
+#define TF_LAG_ENTRY_POOL_NAME_RX         lag_entry_pool_rx
+#define TF_LAG_ENTRY_POOL_NAME_TX         lag_entry_pool_tx
+
+/* SRAM Resource Pool names */
+#define TF_SRAM_FULL_ACTION_POOL_NAME     sram_full_action_pool
+#define TF_SRAM_FULL_ACTION_POOL_NAME_RX  sram_full_action_pool_rx
+#define TF_SRAM_FULL_ACTION_POOL_NAME_TX  sram_full_action_pool_tx
+
+#define TF_SRAM_MCG_POOL_NAME             sram_mcg_pool
+#define TF_SRAM_MCG_POOL_NAME_RX          sram_mcg_pool_rx
+#define TF_SRAM_MCG_POOL_NAME_TX          sram_mcg_pool_tx
+
+#define TF_SRAM_ENCAP_8B_POOL_NAME        sram_encap_8b_pool
+#define TF_SRAM_ENCAP_8B_POOL_NAME_RX     sram_encap_8b_pool_rx
+#define TF_SRAM_ENCAP_8B_POOL_NAME_TX     sram_encap_8b_pool_tx
+
+#define TF_SRAM_ENCAP_16B_POOL_NAME       sram_encap_16b_pool
+#define TF_SRAM_ENCAP_16B_POOL_NAME_RX    sram_encap_16b_pool_rx
+#define TF_SRAM_ENCAP_16B_POOL_NAME_TX    sram_encap_16b_pool_tx
+
+#define TF_SRAM_ENCAP_64B_POOL_NAME       sram_encap_64b_pool
+#define TF_SRAM_ENCAP_64B_POOL_NAME_RX    sram_encap_64b_pool_rx
+#define TF_SRAM_ENCAP_64B_POOL_NAME_TX    sram_encap_64b_pool_tx
+
+#define TF_SRAM_SP_SMAC_POOL_NAME         sram_sp_smac_pool
+#define TF_SRAM_SP_SMAC_POOL_NAME_RX      sram_sp_smac_pool_rx
+#define TF_SRAM_SP_SMAC_POOL_NAME_TX      sram_sp_smac_pool_tx
+
+#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME    sram_sp_smac_ipv4_pool
+#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_RX sram_sp_smac_ipv4_pool_rx
+#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX sram_sp_smac_ipv4_pool_tx
+
+#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME    sram_sp_smac_ipv6_pool
+#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_RX sram_sp_smac_ipv6_pool_rx
+#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX sram_sp_smac_ipv6_pool_tx
+
+#define TF_SRAM_STATS_64B_POOL_NAME       sram_stats_64b_pool
+#define TF_SRAM_STATS_64B_POOL_NAME_RX    sram_stats_64b_pool_rx
+#define TF_SRAM_STATS_64B_POOL_NAME_TX    sram_stats_64b_pool_tx
+
+#define TF_SRAM_NAT_SPORT_POOL_NAME       sram_nat_sport_pool
+#define TF_SRAM_NAT_SPORT_POOL_NAME_RX    sram_nat_sport_pool_rx
+#define TF_SRAM_NAT_SPORT_POOL_NAME_TX    sram_nat_sport_pool_tx
+
+#define TF_SRAM_NAT_DPORT_POOL_NAME       sram_nat_dport_pool
+#define TF_SRAM_NAT_DPORT_POOL_NAME_RX    sram_nat_dport_pool_rx
+#define TF_SRAM_NAT_DPORT_POOL_NAME_TX    sram_nat_dport_pool_tx
+
+#define TF_SRAM_NAT_S_IPV4_POOL_NAME      sram_nat_s_ipv4_pool
+#define TF_SRAM_NAT_S_IPV4_POOL_NAME_RX   sram_nat_s_ipv4_pool_rx
+#define TF_SRAM_NAT_S_IPV4_POOL_NAME_TX   sram_nat_s_ipv4_pool_tx
+
+#define TF_SRAM_NAT_D_IPV4_POOL_NAME      sram_nat_d_ipv4_pool
+#define TF_SRAM_NAT_D_IPV4_POOL_NAME_RX   sram_nat_d_ipv4_pool_rx
+#define TF_SRAM_NAT_D_IPV4_POOL_NAME_TX   sram_nat_d_ipv4_pool_tx
+
+/* Sw Resource Pool Names */
+
+#define TF_L2_CTXT_REMAP_POOL_NAME         l2_ctxt_remap_pool
+#define TF_L2_CTXT_REMAP_POOL_NAME_RX      l2_ctxt_remap_pool_rx
+#define TF_L2_CTXT_REMAP_POOL_NAME_TX      l2_ctxt_remap_pool_tx
+
+
 /** HW Resource types
  */
 enum tf_resource_type_hw {
@@ -57,4 +538,5 @@ enum tf_resource_type_sram {
 	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
 	TF_RESC_TYPE_SRAM_MAX
 };
+
 #endif /* _TF_RESOURCES_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 07/34] net/bnxt: add initial tf core resource mgmt support
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (5 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 06/34] net/bnxt: add tf core session sram functions Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 08/34] net/bnxt: add resource manager functionality Venkat Duvvuru
                     ` (28 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add TruFlow public API definitions for resources
  as well as RM infrastructure

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile             |    1 +
 drivers/net/bnxt/tf_core/tf_core.c    |   39 +
 drivers/net/bnxt/tf_core/tf_core.h    |  125 +++
 drivers/net/bnxt/tf_core/tf_rm.c      | 1731 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_rm.h      |  175 ++++
 drivers/net/bnxt/tf_core/tf_session.h |  208 +++-
 6 files changed, 2277 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index c39c098..02f8c3f 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -51,6 +51,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
 #
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index d82f746..c4f23bd 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -20,6 +20,28 @@ static inline uint32_t SWAP_WORDS32(uint32_t val32)
 		((val32 & 0xffff0000) >> 16));
 }
 
+static void tf_seeds_init(struct tf_session *session)
+{
+	int i;
+	uint32_t r;
+
+	/* Initialize the lfsr */
+	rand_init();
+
+	/* RX and TX use the same seed values */
+	session->lkup_lkup3_init_cfg[TF_DIR_RX] =
+		session->lkup_lkup3_init_cfg[TF_DIR_TX] = SWAP_WORDS32(rand32());
+
+	for (i = 0; i < TF_LKUP_SEED_MEM_SIZE / 2; i++) {
+		r = SWAP_WORDS32(rand32());
+		session->lkup_em_seed_mem[TF_DIR_RX][i * 2] = r;
+		session->lkup_em_seed_mem[TF_DIR_TX][i * 2] = r;
+		r = SWAP_WORDS32(rand32());
+		session->lkup_em_seed_mem[TF_DIR_RX][i * 2 + 1] = (r & 0x1);
+		session->lkup_em_seed_mem[TF_DIR_TX][i * 2 + 1] = (r & 0x1);
+	}
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -109,6 +131,7 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize Session */
 	session->device_type = parms->device_type;
+	tf_rm_init(tfp);
 
 	/* Construct the Session ID */
 	session->session_id.internal.domain = domain;
@@ -125,6 +148,16 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup_close;
 	}
 
+	/* Adjust the Session with what firmware allowed us to get */
+	rc = tf_rm_allocate_validate(tfp);
+	if (rc) {
+		/* Log error */
+		goto cleanup_close;
+	}
+
+	/* Setup hash seeds */
+	tf_seeds_init(session);
+
 	session->ref_count++;
 
 	/* Return session ID */
@@ -195,6 +228,12 @@ tf_close_session(struct tf *tfp)
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	/* Cleanup if we're last user of the session */
+	if (tfs->ref_count == 1) {
+		/* Cleanup any outstanding resources */
+		rc_close = tf_rm_close(tfp);
+	}
+
 	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
 		rc = tf_msg_session_close(tfp);
 		if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 69433ac..3455d8f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -344,4 +344,129 @@ int tf_attach_session(struct tf *tfp,
  */
 int tf_close_session(struct tf *tfp);
 
+/**
+ * @page  ident Identity Management
+ *
+ * @ref tf_alloc_identifier
+ *
+ * @ref tf_free_identifier
+ */
+enum tf_identifier_type {
+	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	 *  and can be used in WC TCAM or EM keys to virtualize further
+	 *  lookups.
+	 */
+	TF_IDENT_TYPE_L2_CTXT,
+	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	 *  to enable virtualization of the profile TCAM.
+	 */
+	TF_IDENT_TYPE_PROF_FUNC,
+	/** The WC profile ID is included in the WC lookup key
+	 *  to enable virtualization of the WC TCAM hardware.
+	 */
+	TF_IDENT_TYPE_WC_PROF,
+	/** The EM profile ID is included in the EM lookup key
+	 *  to enable virtualization of the EM hardware. (not required for Brd4
+	 *  as it has table scope)
+	 */
+	TF_IDENT_TYPE_EM_PROF,
+	/** The L2 func is included in the ILT result and from recycling to
+	 *  enable virtualization of further lookups.
+	 */
+	TF_IDENT_TYPE_L2_FUNC
+};
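+
+/*
+ * Illustrative lookup chain implied by the identifier comments above:
+ * the L2 Ctxt TCAM lookup yields an L2 context and a profile func;
+ * the profile func virtualizes the Profile TCAM, whose result in turn
+ * supplies the WC/EM profile IDs that scope the WC TCAM and EM key
+ * lookups.
+ */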
+
+/**
+ * TCAM table type
+ */
+enum tf_tcam_tbl_type {
+	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	TF_TCAM_TBL_TYPE_PROF_TCAM,
+	TF_TCAM_TBL_TYPE_WC_TCAM,
+	TF_TCAM_TBL_TYPE_SP_TCAM,
+	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
+	TF_TCAM_TBL_TYPE_VEB_TCAM,
+	TF_TCAM_TBL_TYPE_MAX
+
+};
+
+/**
+ * Enumeration of TruFlow table types. A table type is used to identify a
+ * resource object.
+ *
+ * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
+ * the only table type that is connected with a table scope.
+ */
+enum tf_tbl_type {
+	/** Wh+/Brd2 Action Record */
+	TF_TBL_TYPE_FULL_ACT_RECORD,
+	/** Multicast Groups */
+	TF_TBL_TYPE_MCAST_GROUPS,
+	/** Action Encap 8 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_8B,
+	/** Action Encap 16 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_16B,
+	/** Action Encap 32 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_32B,
+	/** Action Encap 64 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_64B,
+	/** Action Source Properties SMAC */
+	TF_TBL_TYPE_ACT_SP_SMAC,
+	/** Action Source Properties SMAC IPv4 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	/** Action Source Properties SMAC IPv6 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
+	/** Action Statistics 64 Bits */
+	TF_TBL_TYPE_ACT_STATS_64,
+	/** Action Modify L4 Src Port */
+	TF_TBL_TYPE_ACT_MODIFY_SPORT,
+	/** Action Modify L4 Dest Port */
+	TF_TBL_TYPE_ACT_MODIFY_DPORT,
+	/** Action Modify IPv4 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
+	/** Action Modify IPv4 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
+	/** Action Modify IPv6 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
+	/** Action Modify IPv6 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
+
+	/* HW */
+
+	/** Meter Profiles */
+	TF_TBL_TYPE_METER_PROF,
+	/** Meter Instance */
+	TF_TBL_TYPE_METER_INST,
+	/** Mirror Config */
+	TF_TBL_TYPE_MIRROR_CONFIG,
+	/** UPAR */
+	TF_TBL_TYPE_UPAR,
+	/** Brd4 Epoch 0 table */
+	TF_TBL_TYPE_EPOCH0,
+	/** Brd4 Epoch 1 table  */
+	TF_TBL_TYPE_EPOCH1,
+	/** Brd4 Metadata  */
+	TF_TBL_TYPE_METADATA,
+	/** Brd4 CT State  */
+	TF_TBL_TYPE_CT_STATE,
+	/** Brd4 Range Profile  */
+	TF_TBL_TYPE_RANGE_PROF,
+	/** Brd4 Range Entry  */
+	TF_TBL_TYPE_RANGE_ENTRY,
+	/** Brd4 LAG Entry  */
+	TF_TBL_TYPE_LAG,
+	/** Brd4 only VNIC/SVIF Table */
+	TF_TBL_TYPE_VNIC_SVIF,
+
+	/* External */
+
+	/** External table type - initially 1 poolsize entries.
+	 * All External table types are associated with a table
+	 * scope. Internal types are not.
+	 */
+	TF_TBL_TYPE_EXT,
+	/** Future - external pool of size0 entries */
+	TF_TBL_TYPE_EXT_0,
+	TF_TBL_TYPE_MAX
+};
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
new file mode 100644
index 0000000..56767e7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -0,0 +1,1731 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+
+#include "tf_rm.h"
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tf_resources.h"
+#include "tf_msg.h"
+#include "bnxt.h"
+
+/**
+ * Internal macro to perform HW resource allocation check between what
+ * firmware reports vs what was statically requested.
+ *
+ * Parameters:
+ *   struct tf_rm_hw_query    *hquery      - Pointer to the hw query result
+ *   enum tf_dir               dir         - Direction to process
+ *   enum tf_resource_type_hw  hcapi_type  - HCAPI type, the index element
+ *                                           in the hw query structure
+ *   define                    def_value   - Define value to check against
+ *   uint32_t                 *eflag       - Result of the check
+ */
+#define TF_RM_CHECK_HW_ALLOC(hquery, dir, hcapi_type, def_value, eflag) do {  \
+	if ((dir) == TF_DIR_RX) {					      \
+		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _RX) \
+			*(eflag) |= 1 << (hcapi_type);			      \
+	} else {							      \
+		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _TX) \
+			*(eflag) |= 1 << (hcapi_type);			      \
+	}								      \
+} while (0)
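+
+/*
+ * Illustrative expansion (not part of this patch): for the RX range
+ * entry check used below, the macro expands to roughly
+ *
+ *   if (query->hw_query[TF_RESC_TYPE_HW_RANGE_ENTRY].max !=
+ *       TF_RSVD_RANGE_ENTRY_RX)
+ *           *error_flag |= 1 << TF_RESC_TYPE_HW_RANGE_ENTRY;
+ */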
+
+/**
+ * Internal macro to perform SRAM resource allocation check between
+ * what firmware reports vs what was statically requested.
+ *
+ * Parameters:
+ *   struct tf_rm_sram_query   *squery      - Pointer to the sram query result
+ *   enum tf_dir                dir         - Direction to process
+ *   enum tf_resource_type_sram hcapi_type  - HCAPI type, the index element
+ *                                            in the sram query structure
+ *   define                     def_value   - Define value to check against
+ *   uint32_t                  *eflag       - Result of the check
+ */
+#define TF_RM_CHECK_SRAM_ALLOC(squery, dir, hcapi_type, def_value, eflag) do { \
+	if ((dir) == TF_DIR_RX) {					       \
+		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _RX)\
+			*(eflag) |= 1 << (hcapi_type);			       \
+	} else {							       \
+		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _TX)\
+			*(eflag) |= 1 << (hcapi_type);			       \
+	}								       \
+} while (0)
+
+/**
+ * Internal macro to convert a reserved resource define name to be
+ * direction specific.
+ *
+ * Parameters:
+ *   enum tf_dir    dir         - Direction to process
+ *   string         type        - Type name to append RX or TX to
+ *   string         dtype       - Direction specific type
+ */
+#define TF_RESC_RSVD(dir, type, dtype) do {	\
+		if ((dir) == TF_DIR_RX)		\
+			(dtype) = type ## _RX;	\
+		else				\
+			(dtype) = type ## _TX;	\
+	} while (0)
+
+const char
+*tf_dir_2_str(enum tf_dir dir)
+{
+	switch (dir) {
+	case TF_DIR_RX:
+		return "RX";
+	case TF_DIR_TX:
+		return "TX";
+	default:
+		return "Invalid direction";
+	}
+}
+
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type)
+{
+	switch (id_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		return "l2_ctxt_remap";
+	case TF_IDENT_TYPE_PROF_FUNC:
+		return "prof_func";
+	case TF_IDENT_TYPE_WC_PROF:
+		return "wc_prof";
+	case TF_IDENT_TYPE_EM_PROF:
+		return "em_prof";
+	case TF_IDENT_TYPE_L2_FUNC:
+		return "l2_func";
+	default:
+		break;
+	}
+	return "Invalid identifier";
+}
+
+const char
+*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type)
+{
+	switch (sram_type) {
+	case TF_RESC_TYPE_SRAM_FULL_ACTION:
+		return "Full action";
+	case TF_RESC_TYPE_SRAM_MCG:
+		return "MCG";
+	case TF_RESC_TYPE_SRAM_ENCAP_8B:
+		return "Encap 8B";
+	case TF_RESC_TYPE_SRAM_ENCAP_16B:
+		return "Encap 16B";
+	case TF_RESC_TYPE_SRAM_ENCAP_64B:
+		return "Encap 64B";
+	case TF_RESC_TYPE_SRAM_SP_SMAC:
+		return "Source properties SMAC";
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
+		return "Source properties SMAC IPv4";
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
+		return "Source properties IPv6";
+	case TF_RESC_TYPE_SRAM_COUNTER_64B:
+		return "Counter 64B";
+	case TF_RESC_TYPE_SRAM_NAT_SPORT:
+		return "NAT source port";
+	case TF_RESC_TYPE_SRAM_NAT_DPORT:
+		return "NAT destination port";
+	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
+		return "NAT source IPv4";
+	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
+		return "NAT destination IPv4";
+	default:
+		return "Invalid identifier";
+	}
+}
+
+/**
+ * Helper function to perform a SRAM HCAPI resource type lookup
+ * against the reserved value of the same static type.
+ *
+ * Returns:
+ *   -EOPNOTSUPP - Reserved resource type not supported
+ *   Value       - Integer value of the reserved value for the requested type
+ */
+static int
+tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
+{
+	uint32_t value = -EOPNOTSUPP;
+
+	switch (index) {
+	case TF_RESC_TYPE_SRAM_FULL_ACTION:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_FULL_ACTION, value);
+		break;
+	case TF_RESC_TYPE_SRAM_MCG:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_MCG, value);
+		break;
+	case TF_RESC_TYPE_SRAM_ENCAP_8B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_8B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_ENCAP_16B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_16B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_ENCAP_64B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_64B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_SP_SMAC:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC, value);
+		break;
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV4, value);
+		break;
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV6, value);
+		break;
+	case TF_RESC_TYPE_SRAM_COUNTER_64B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_COUNTER_64B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_SPORT:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_SPORT, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_DPORT:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_DPORT, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_S_IPV4, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_D_IPV4, value);
+		break;
+	default:
+		break;
+	}
+
+	return value;
+}
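+
+/*
+ * Example (illustrative): tf_rm_rsvd_sram_value(TF_DIR_TX,
+ * TF_RESC_TYPE_SRAM_ENCAP_64B) resolves TF_RSVD_SRAM_ENCAP_64B_TX,
+ * i.e. 1007, while an unhandled type falls through and returns
+ * -EOPNOTSUPP.
+ */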
+
+/**
+ * Helper function to print all the SRAM resource qcaps errors
+ * reported in the error_flag.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] sram_query
+ *   Pointer to the sram query result reported by firmware
+ *
+ * [in] error_flag
+ *   Pointer to the sram error flags created at time of the query check
+ */
+static void
+tf_rm_print_sram_qcaps_error(enum tf_dir dir,
+			     struct tf_rm_sram_query *sram_query,
+			     uint32_t *error_flag)
+{
+	int i;
+
+	PMD_DRV_LOG(ERR, "QCAPS errors SRAM\n");
+	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	PMD_DRV_LOG(ERR, "  Elements:\n");
+
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
+		if (*error_flag & 1 << i)
+			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+				    tf_hcapi_sram_2_str(i),
+				    sram_query->sram_query[i].max,
+				    tf_rm_rsvd_sram_value(dir, i));
+	}
+}
+
+/**
+ * Performs a HW resource check between what firmware capability
+ * reports and what the core expects is available.
+ *
+ * Firmware performs the resource carving at AFM init time and the
+ * resource capability is reported in the TruFlow qcaps msg.
+ *
+ * [in] query
+ *   Pointer to HW Query data structure. Query holds what the firmware
+ *   offers of the HW resources.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in/out] error_flag
+ *   Pointer to a bit array indicating the error of a single HCAPI
+ *   resource type. When a bit is set to 1, the HCAPI resource type
+ *   failed static allocation.
+ *
+ * Returns:
+ *  0       - Success
+ *  -ENOMEM - Failure on one of the allocated resources. Check the
+ *            error_flag for what types are flagged errored.
+ */
+static int
+tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
+			    enum tf_dir dir,
+			    uint32_t *error_flag)
+{
+	*error_flag = 0;
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_RANGE_ENTRY,
+			     TF_RSVD_RANGE_ENTRY,
+			     error_flag);
+
+	if (*error_flag != 0)
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * Performs a SRAM resource check between what firmware capability
+ * reports and what the core expects is available.
+ *
+ * Firmware performs the resource carving at AFM init time and the
+ * resource capability is reported in the TruFlow qcaps msg.
+ *
+ * [in] query
+ *   Pointer to SRAM Query data structure. Query holds what the
+ *   firmware offers of the SRAM resources.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in/out] error_flag
+ *   Pointer to a bit array indicating the error of a single HCAPI
+ *   resource type. When a bit is set to 1, the HCAPI resource type
+ *   failed static allocation.
+ *
+ * Returns:
+ *  0       - Success
+ *  -ENOMEM - Failure on one of the allocated resources. Check the
+ *            error_flag for what types are flagged errored.
+ */
+static int
+tf_rm_check_sram_qcaps_static(struct tf_rm_sram_query *query,
+			      enum tf_dir dir,
+			      uint32_t *error_flag)
+{
+	*error_flag = 0;
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_FULL_ACTION,
+			       TF_RSVD_SRAM_FULL_ACTION,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_MCG,
+			       TF_RSVD_SRAM_MCG,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_ENCAP_8B,
+			       TF_RSVD_SRAM_ENCAP_8B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_ENCAP_16B,
+			       TF_RSVD_SRAM_ENCAP_16B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_ENCAP_64B,
+			       TF_RSVD_SRAM_ENCAP_64B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_SP_SMAC,
+			       TF_RSVD_SRAM_SP_SMAC,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+			       TF_RSVD_SRAM_SP_SMAC_IPV4,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+			       TF_RSVD_SRAM_SP_SMAC_IPV6,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_COUNTER_64B,
+			       TF_RSVD_SRAM_COUNTER_64B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_SPORT,
+			       TF_RSVD_SRAM_NAT_SPORT,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_DPORT,
+			       TF_RSVD_SRAM_NAT_DPORT,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_S_IPV4,
+			       TF_RSVD_SRAM_NAT_S_IPV4,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_D_IPV4,
+			       TF_RSVD_SRAM_NAT_D_IPV4,
+			       error_flag);
+
+	if (*error_flag != 0)
+		return -ENOMEM;
+
+	return 0;
+}
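+
+/*
+ * Illustrative only: on -ENOMEM the caller can report each failing
+ * type, e.g.
+ *
+ *   if (tf_rm_check_sram_qcaps_static(&query, dir, &error_flag))
+ *           tf_rm_print_sram_qcaps_error(dir, &query, &error_flag);
+ */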
+
+/**
+ * Internal function to mark pool entries used.
+ */
+static void
+tf_rm_reserve_range(uint32_t count,
+		    uint32_t rsv_begin,
+		    uint32_t rsv_end,
+		    uint32_t max,
+		    struct bitalloc *pool)
+{
+	uint32_t i;
+
+	/* If no resources have been requested we mark everything
+	 * 'used'
+	 */
+	if (count == 0)	{
+		for (i = 0; i < max; i++)
+			ba_alloc_index(pool, i);
+	} else {
+		/* Support 2 main modes
+		 * Reserved range starts from bottom up (with
+		 * pre-reserved value or not)
+		 * - begin = 0 to end xx
+		 * - begin = 1 to end xx
+		 *
+		 * Reserved range starts from top down
+		 * - begin = yy to end max
+		 */
+
+		/* Bottom up check, start from 0 */
+		if (rsv_begin == 0) {
+			for (i = rsv_end + 1; i < max; i++)
+				ba_alloc_index(pool, i);
+		}
+
+		/* Bottom up check, start from 1 or higher OR
+		 * Top Down
+		 */
+		if (rsv_begin >= 1) {
+			/* Allocate from 0 until start */
+			for (i = 0; i < rsv_begin; i++)
+				ba_alloc_index(pool, i);
+
+			/* Skip and then do the remaining */
+			if (rsv_end < max - 1) {
+				for (i = rsv_end; i < max; i++)
+					ba_alloc_index(pool, i);
+			}
+		}
+	}
+}
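+/*
+ * Illustrative only: with count=64, rsv_begin=0, rsv_end=63 and
+ * max=1024, entries 64..1023 are marked used and 0..63 stay free for
+ * TruFlow (bottom-up case). With count=64, rsv_begin=960 and
+ * rsv_end=1023, entries 0..959 are marked used and the top-down range
+ * 960..1023 stays free.
+ */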
+
+/**
+ * Internal function to mark all the l2 ctxt allocated that Truflow
+ * does not own.
+ */
+static void
+tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
+	uint32_t end = 0;
+
+	/* l2 ctxt rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
+
+	/* l2 ctxt tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the l2 func resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_l2_func(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
+	uint32_t end = 0;
+
+	/* l2 func rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_RX);
+
+	/* l2 func tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the full action resources allocated
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_full_action(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_FULL_ACTION;
+	uint16_t end = 0;
+
+	/* full action rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_FULL_ACTION_RX,
+			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX);
+
+	/* full action tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_FULL_ACTION_TX,
+			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the multicast group resources
+ * allocated that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_mcg(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_MCG;
+	uint16_t end = 0;
+
+	/* multicast group rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_MCG_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_MCG_RX,
+			    tfs->TF_SRAM_MCG_POOL_NAME_RX);
+
+	/* Multicast Group on TX is not supported */
+}
+
+/**
+ * Internal function to mark all the encap resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_encap(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_ENCAP_8B;
+	uint16_t end = 0;
+
+	/* encap 8b rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_8B_RX,
+			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX);
+
+	/* encap 8b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_8B_TX,
+			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_ENCAP_16B;
+
+	/* encap 16b rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_16B_RX,
+			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX);
+
+	/* encap 16b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_16B_TX,
+			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_ENCAP_64B;
+
+	/* Encap 64B not supported on RX */
+
+	/* Encap 64b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_64B_TX,
+			    tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the source property
+ * (SP) resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_sp(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_SP_SMAC;
+	uint16_t end = 0;
+
+	/* sp smac rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_RX,
+			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX);
+
+	/* sp smac tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_TX,
+			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
+
+	/* SP SMAC IPv4 not supported on RX */
+
+	/* sp smac ipv4 tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_IPV4_TX,
+			    tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
+
+	/* SP SMAC IPv6 not supported on RX */
+
+	/* sp smac ipv6 tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_IPV6_TX,
+			    tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the stat resources
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_stats(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_COUNTER_64B;
+	uint16_t end = 0;
+
+	/* counter 64b rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_COUNTER_64B_RX,
+			    tfs->TF_SRAM_STATS_64B_POOL_NAME_RX);
+
+	/* counter 64b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_COUNTER_64B_TX,
+			    tfs->TF_SRAM_STATS_64B_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the NAT resources
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_nat(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_NAT_SPORT;
+	uint16_t end = 0;
+
+	/* nat source port rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_SPORT_RX,
+			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX);
+
+	/* nat source port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_SPORT_TX,
+			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_NAT_DPORT;
+
+	/* nat destination port rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_DPORT_RX,
+			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX);
+
+	/* nat destination port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_DPORT_TX,
+			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
+
+	/* nat source port ipv4 rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_S_IPV4_RX,
+			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX);
+
+	/* nat source ipv4 port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_S_IPV4_TX,
+			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
+
+	/* nat destination port ipv4 rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_D_IPV4_RX,
+			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX);
+
+	/* nat destination ipv4 port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_D_IPV4_TX,
+			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX);
+}
+
+/**
+ * Internal function used to validate the allocated HW resources
+ * against the requested values.
+ */
+static int
+tf_rm_hw_alloc_validate(enum tf_dir dir,
+			struct tf_rm_hw_alloc *hw_alloc,
+			struct tf_rm_entry *hw_entry)
+{
+	int error = 0;
+	int i;
+
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
+		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
+			PMD_DRV_LOG(ERR,
+				"%s, Alloc failed id:%d expect:%d got:%d\n",
+				tf_dir_2_str(dir),
+				i,
+				hw_alloc->hw_num[i],
+				hw_entry[i].stride);
+			error = -1;
+		}
+	}
+
+	return error;
+}
+
+/**
+ * Internal function used to validate the allocated SRAM resources
+ * against the requested values.
+ */
+static int
+tf_rm_sram_alloc_validate(enum tf_dir dir,
+			  struct tf_rm_sram_alloc *sram_alloc,
+			  struct tf_rm_entry *sram_entry)
+{
+	int error = 0;
+	int i;
+
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
+		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
+			PMD_DRV_LOG(ERR,
+				"%s, Alloc failed idx:%d expect:%d got:%d\n",
+				tf_dir_2_str(dir),
+				i,
+				sram_alloc->sram_num[i],
+				sram_entry[i].stride);
+			error = -1;
+		}
+	}
+
+	return error;
+}
+
+/**
+ * Internal function used to mark as allocated all the HW resources
+ * that Truflow does not own.
+ */
+static void
+tf_rm_reserve_hw(struct tf *tfp)
+{
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	/* TBD
+	 * There is no direct AFM resource allocation as it is carved
+	 * statically at AFM boot time. Thus the bit allocators work
+	 * on the full HW resource amount and we just mark everything
+	 * used except the resources that Truflow took ownership of.
+	 */
+	tf_rm_rsvd_l2_ctxt(tfs);
+	tf_rm_rsvd_l2_func(tfs);
+}
+
+/**
+ * Internal function used to mark as allocated all the SRAM resources
+ * that Truflow does not own.
+ */
+static void
+tf_rm_reserve_sram(struct tf *tfp)
+{
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	/* TBD
+	 * There is no direct AFM resource allocation as it is carved
+	 * statically at AFM boot time. Thus the bit allocators work
+	 * on the full SRAM resource amount and we just mark everything
+	 * used except the resources that Truflow took ownership of.
+	 */
+	tf_rm_rsvd_sram_full_action(tfs);
+	tf_rm_rsvd_sram_mcg(tfs);
+	tf_rm_rsvd_sram_encap(tfs);
+	tf_rm_rsvd_sram_sp(tfs);
+	tf_rm_rsvd_sram_stats(tfs);
+	tf_rm_rsvd_sram_nat(tfs);
+}
+
+/**
+ * Internal function used to allocate and validate all HW resources.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0  - Success
+ *   -1 - Internal error
+ */
+static int
+tf_rm_allocate_validate_hw(struct tf *tfp,
+			   enum tf_dir dir)
+{
+	int rc;
+	int i;
+	struct tf_rm_hw_query hw_query;
+	struct tf_rm_hw_alloc hw_alloc;
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+	struct tf_rm_entry *hw_entries;
+	uint32_t error_flag;
+
+	if (dir == TF_DIR_RX)
+		hw_entries = tfs->resc.rx.hw_entry;
+	else
+		hw_entries = tfs->resc.tx.hw_entry;
+
+	/* Query for Session HW Resources */
+	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, HW qcaps message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
+			tf_dir_2_str(dir),
+			error_flag);
+		goto cleanup;
+	}
+
+	/* Post process HW capability */
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++)
+		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
+
+	/* Allocate Session HW Resources */
+	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, HW alloc message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	/* Perform HW allocation validation as it's possible the
+	 * resource availability changed between qcaps and alloc
+	 */
+	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, HW Resource validation failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	return 0;
+
+ cleanup:
+	return -1;
+}
+
+/**
+ * Internal function used to allocate and validate all SRAM resources.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0  - Success
+ *   -1 - Internal error
+ */
+static int
+tf_rm_allocate_validate_sram(struct tf *tfp,
+			     enum tf_dir dir)
+{
+	int rc;
+	int i;
+	struct tf_rm_sram_query sram_query;
+	struct tf_rm_sram_alloc sram_alloc;
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+	struct tf_rm_entry *sram_entries;
+	uint32_t error_flag;
+
+	if (dir == TF_DIR_RX)
+		sram_entries = tfs->resc.rx.sram_entry;
+	else
+		sram_entries = tfs->resc.tx.sram_entry;
+
+	/* Query for Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+			tf_dir_2_str(dir),
+			error_flag);
+		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
+		goto cleanup;
+	}
+
+	/* Post process SRAM capability */
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
+		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+
+	/* Allocate Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_alloc(tfp,
+					    dir,
+					    &sram_alloc,
+					    sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	/* Perform SRAM allocation validation as it's possible the
+	 * resource availability changed between qcaps and alloc
+	 */
+	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	return 0;
+
+ cleanup:
+	return -1;
+}
+
+/**
+ * Helper function used to prune a SRAM resource array to only hold
+ * elements that need to be flushed.
+ *
+ * [in] tfs
+ *   Session handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] sram_entries
+ *   Master SRAM Resource database
+ *
+ * [in/out] flush_entries
+ *   Pruned SRAM Resource database of entries to be flushed. This
+ *   array should be passed in as a complete copy of the master SRAM
+ *   Resource database. The outgoing result is a pruned version
+ *   based on the checks performed
+ *
+ * Returns:
+ *    0 - Success, no flush required
+ *    1 - Success, flush required
+ *   -1 - Internal error
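+ *
+ * Example (illustrative sketch mirroring the use in tf_rm_close):
+ *
+ *   struct tf_rm_db flush_resc = tfs->resc;
+ *
+ *   rc = tf_rm_sram_to_flush(tfs, TF_DIR_RX,
+ *                            tfs->resc.rx.sram_entry,
+ *                            flush_resc.rx.sram_entry);
+ *   if (rc == 1)
+ *           tf_msg_session_sram_resc_flush(tfp, TF_DIR_RX,
+ *                                          flush_resc.rx.sram_entry);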
+ */
+static int
+tf_rm_sram_to_flush(struct tf_session *tfs,
+		    enum tf_dir dir,
+		    struct tf_rm_entry *sram_entries,
+		    struct tf_rm_entry *flush_entries)
+{
+	int rc;
+	int flush_rc = 0;
+	int free_cnt;
+	struct bitalloc *pool;
+
+	/* Check all the sram resource pools and check for left over
+	 * elements. Any found will result in the complete pool of a
+	 * type to get invalidated.
+	 */
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_FULL_ACTION_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	/* Only pools for RX direction */
+	if (dir == TF_DIR_RX) {
+		TF_RM_GET_POOLS_RX(tfs, &pool,
+				   TF_SRAM_MCG_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_MCG].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune TX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_ENCAP_8B_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_ENCAP_16B_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	/* Only pools for TX direction */
+	if (dir == TF_DIR_TX) {
+		TF_RM_GET_POOLS_TX(tfs, &pool,
+				   TF_SRAM_ENCAP_64B_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt ==
+		    sram_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune RX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_SP_SMAC_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	/* Only pools for TX direction */
+	if (dir == TF_DIR_TX) {
+		TF_RM_GET_POOLS_TX(tfs, &pool,
+				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt ==
+		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride =
+				0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune RX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride = 0;
+	}
+
+	/* Only pools for TX direction */
+	if (dir == TF_DIR_TX) {
+		TF_RM_GET_POOLS_TX(tfs, &pool,
+				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt ==
+		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride =
+				0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune RX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride = 0;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_STATS_64B_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_SPORT_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_DPORT_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_S_IPV4_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_D_IPV4_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	return flush_rc;
+}
+
+/**
+ * Helper function used to generate an error log for the SRAM types
+ * that need to be flushed. The types should have been cleaned up
+ * ahead of invoking tf_close_session.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] sram_entries
+ *   SRAM Resource database holding elements to be flushed
+ */
+static void
+tf_rm_log_sram_flush(enum tf_dir dir,
+		     struct tf_rm_entry *sram_entries)
+{
+	int i;
+
+	/* Walk the sram flush array and log the types that weren't
+	 * cleaned up.
+	 */
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
+		if (sram_entries[i].stride != 0)
+			PMD_DRV_LOG(ERR,
+				    "%s: %s was not cleaned up\n",
+				    tf_dir_2_str(dir),
+				    tf_hcapi_sram_2_str(i));
+	}
+}
+
+void
+tf_rm_init(struct tf *tfp)
+{
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	/* This version is host specific and should be checked against
+	 * when attaching as there is no guarantee that a secondary
+	 * would run from the same image version.
+	 */
+	tfs->ver.major = TF_SESSION_VER_MAJOR;
+	tfs->ver.minor = TF_SESSION_VER_MINOR;
+	tfs->ver.update = TF_SESSION_VER_UPDATE;
+
+	tfs->session_id.id = 0;
+	tfs->ref_count = 0;
+
+	/* Initialization of Table Scopes */
+	/* ll_init(&tfs->tbl_scope_ll); */
+
+	/* Initialization of HW and SRAM resource DB */
+	memset(&tfs->resc, 0, sizeof(struct tf_rm_db));
+
+	/* Initialization of HW Resource Pools */
+	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+
+	/* Initialization of SRAM Resource Pools
+	 * These pools are set to the TFLIB defined MAX sizes, not
+	 * AFM's HW max, so as to limit the memory consumption
+	 */
+	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX,
+		TF_RSVD_SRAM_FULL_ACTION_RX);
+	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX,
+		TF_RSVD_SRAM_FULL_ACTION_TX);
+	/* Only Multicast Group on RX is supported */
+	ba_init(tfs->TF_SRAM_MCG_POOL_NAME_RX,
+		TF_RSVD_SRAM_MCG_RX);
+	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX,
+		TF_RSVD_SRAM_ENCAP_8B_RX);
+	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX,
+		TF_RSVD_SRAM_ENCAP_8B_TX);
+	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX,
+		TF_RSVD_SRAM_ENCAP_16B_RX);
+	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX,
+		TF_RSVD_SRAM_ENCAP_16B_TX);
+	/* Only Encap 64B on TX is supported */
+	ba_init(tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX,
+		TF_RSVD_SRAM_ENCAP_64B_TX);
+	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX,
+		TF_RSVD_SRAM_SP_SMAC_RX);
+	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX,
+		TF_RSVD_SRAM_SP_SMAC_TX);
+	/* Only SP SMAC IPv4 on TX is supported */
+	ba_init(tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
+		TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
+	/* Only SP SMAC IPv6 on TX is supported */
+	ba_init(tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
+		TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
+	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_RX,
+		TF_RSVD_SRAM_COUNTER_64B_RX);
+	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_TX,
+		TF_RSVD_SRAM_COUNTER_64B_TX);
+	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_SPORT_RX);
+	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_SPORT_TX);
+	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_DPORT_RX);
+	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_DPORT_TX);
+	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_S_IPV4_RX);
+	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_S_IPV4_TX);
+	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_D_IPV4_RX);
+	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_D_IPV4_TX);
+
+	/* Initialization of pools local to TF Core */
+	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+}
+
+int
+tf_rm_allocate_validate(struct tf *tfp)
+{
+	int rc;
+	int i;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = tf_rm_allocate_validate_hw(tfp, i);
+		if (rc)
+			return rc;
+		rc = tf_rm_allocate_validate_sram(tfp, i);
+		if (rc)
+			return rc;
+	}
+
+	/* With both HW and SRAM allocated and validated we can
+	 * 'scrub' the reservation on the pools.
+	 */
+	tf_rm_reserve_hw(tfp);
+	tf_rm_reserve_sram(tfp);
+
+	return rc;
+}
+
+int
+tf_rm_close(struct tf *tfp)
+{
+	int rc;
+	int rc_close = 0;
+	int i;
+	struct tf_rm_entry *hw_entries;
+	struct tf_rm_entry *sram_entries;
+	struct tf_rm_entry *sram_flush_entries;
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	struct tf_rm_db flush_resc = tfs->resc;
+
+	/* On close it is assumed that the session has already cleaned
+	 * up all its resources, individually, while destroying its
+	 * flows. No checking is performed thus the behavior is as
+	 * follows.
+	 *
+	 * Session RM will signal FW to release session resources. FW
+	 * will perform invalidation of all the allocated entries
+	 * (this assures any outstanding resources have been cleared),
+	 * then free the FW RM instance.
+	 *
+	 * Session will then be freed by tf_close_session() thus there
+	 * is no need to clean each resource pool as the whole session
+	 * is going away.
+	 */
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		if (i == TF_DIR_RX) {
+			hw_entries = tfs->resc.rx.hw_entry;
+			sram_entries = tfs->resc.rx.sram_entry;
+			sram_flush_entries = flush_resc.rx.sram_entry;
+		} else {
+			hw_entries = tfs->resc.tx.hw_entry;
+			sram_entries = tfs->resc.tx.sram_entry;
+			sram_flush_entries = flush_resc.tx.sram_entry;
+		}
+
+		/* Check for any not previously freed SRAM resources
+		 * and flush if required.
+		 */
+		rc = tf_rm_sram_to_flush(tfs,
+					 i,
+					 sram_entries,
+					 sram_flush_entries);
+		if (rc) {
+			rc_close = -ENOTEMPTY;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, lingering SRAM resources\n",
+				    tf_dir_2_str(i));
+
+			/* Log the entries to be flushed */
+			tf_rm_log_sram_flush(i, sram_flush_entries);
+
+			rc = tf_msg_session_sram_resc_flush(tfp,
+							    i,
+							    sram_flush_entries);
+			if (rc) {
+				rc_close = rc;
+				/* Log error */
+				PMD_DRV_LOG(ERR,
+					    "%s, SRAM flush failed\n",
+					    tf_dir_2_str(i));
+			}
+		}
+
+		rc = tf_msg_session_hw_resc_free(tfp, i, hw_entries);
+		if (rc) {
+			rc_close = rc;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, HW free failed\n",
+				    tf_dir_2_str(i));
+		}
+
+		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
+		if (rc) {
+			rc_close = rc;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, SRAM free failed\n",
+				    tf_dir_2_str(i));
+		}
+	}
+
+	return rc_close;
+}
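+
+/*
+ * Note: a sketch of the expected call order around the functions
+ * above (illustrative only, error handling omitted):
+ *
+ *   tf_rm_init(tfp);                     set up resource DB and pools
+ *   rc = tf_rm_allocate_validate(tfp);   qcaps, alloc and validate
+ *   ...per-resource allocations while the session is active...
+ *   rc = tf_rm_close(tfp);               flush leftovers, free FW side
+ */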
+
+int
+tf_rm_convert_tbl_type(enum tf_tbl_type type,
+		       uint32_t *hcapi_type)
+{
+	int rc = 0;
+
+	switch (type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		*hcapi_type = TF_RESC_TYPE_SRAM_FULL_ACTION;
+		break;
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		*hcapi_type = TF_RESC_TYPE_SRAM_MCG;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_8B;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_16B;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_64B;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
+		break;
+	case TF_TBL_TYPE_ACT_STATS_64:
+		*hcapi_type = TF_RESC_TYPE_SRAM_COUNTER_64B;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_SPORT;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_DPORT;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
+		break;
+	case TF_TBL_TYPE_METER_PROF:
+		*hcapi_type = TF_RESC_TYPE_HW_METER_PROF;
+		break;
+	case TF_TBL_TYPE_METER_INST:
+		*hcapi_type = TF_RESC_TYPE_HW_METER_INST;
+		break;
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		*hcapi_type = TF_RESC_TYPE_HW_MIRROR;
+		break;
+	case TF_TBL_TYPE_UPAR:
+		*hcapi_type = TF_RESC_TYPE_HW_UPAR;
+		break;
+	case TF_TBL_TYPE_EPOCH0:
+		*hcapi_type = TF_RESC_TYPE_HW_EPOCH0;
+		break;
+	case TF_TBL_TYPE_EPOCH1:
+		*hcapi_type = TF_RESC_TYPE_HW_EPOCH1;
+		break;
+	case TF_TBL_TYPE_METADATA:
+		*hcapi_type = TF_RESC_TYPE_HW_METADATA;
+		break;
+	case TF_TBL_TYPE_CT_STATE:
+		*hcapi_type = TF_RESC_TYPE_HW_CT_STATE;
+		break;
+	case TF_TBL_TYPE_RANGE_PROF:
+		*hcapi_type = TF_RESC_TYPE_HW_RANGE_PROF;
+		break;
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		*hcapi_type = TF_RESC_TYPE_HW_RANGE_ENTRY;
+		break;
+	case TF_TBL_TYPE_LAG:
+		*hcapi_type = TF_RESC_TYPE_HW_LAG_ENTRY;
+		break;
+	/* Not yet supported */
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+	case TF_TBL_TYPE_VNIC_SVIF:
+	case TF_TBL_TYPE_EXT:   /* No pools for this type */
+	case TF_TBL_TYPE_EXT_0: /* No pools for this type */
+	default:
+		*hcapi_type = -1;
+		rc = -EOPNOTSUPP;
+	}
+
+	return rc;
+}
+
+int
+tf_rm_convert_index(struct tf_session *tfs,
+		    enum tf_dir dir,
+		    enum tf_tbl_type type,
+		    enum tf_rm_convert_type c_type,
+		    uint32_t index,
+		    uint32_t *convert_index)
+{
+	int rc;
+	struct tf_rm_resc *resc;
+	uint32_t hcapi_type;
+	uint32_t base_index;
+
+	if (dir == TF_DIR_RX)
+		resc = &tfs->resc.rx;
+	else if (dir == TF_DIR_TX)
+		resc = &tfs->resc.tx;
+	else
+		return -EOPNOTSUPP;
+
+	rc = tf_rm_convert_tbl_type(type, &hcapi_type);
+	if (rc)
+		return -1;
+
+	switch (type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+	case TF_TBL_TYPE_MCAST_GROUPS:
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+	case TF_TBL_TYPE_ACT_STATS_64:
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		base_index = resc->sram_entry[hcapi_type].start;
+		break;
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+	case TF_TBL_TYPE_METER_PROF:
+	case TF_TBL_TYPE_METER_INST:
+	case TF_TBL_TYPE_UPAR:
+	case TF_TBL_TYPE_EPOCH0:
+	case TF_TBL_TYPE_EPOCH1:
+	case TF_TBL_TYPE_METADATA:
+	case TF_TBL_TYPE_CT_STATE:
+	case TF_TBL_TYPE_RANGE_PROF:
+	case TF_TBL_TYPE_RANGE_ENTRY:
+	case TF_TBL_TYPE_LAG:
+		base_index = resc->hw_entry[hcapi_type].start;
+		break;
+	/* Not yet supported */
+	case TF_TBL_TYPE_VNIC_SVIF:
+	case TF_TBL_TYPE_EXT:   /* No pools for this type */
+	case TF_TBL_TYPE_EXT_0: /* No pools for this type */
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	switch (c_type) {
+	case TF_RM_CONVERT_RM_BASE:
+		*convert_index = index - base_index;
+		break;
+	case TF_RM_CONVERT_ADD_BASE:
+		*convert_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 57ce19b..e69d443 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -107,6 +107,54 @@ struct tf_rm_sram_alloc {
 };
 
 /**
+ * Resource Manager arrays for a single direction
+ */
+struct tf_rm_resc {
+	/** array of HW resource entries */
+	struct tf_rm_entry hw_entry[TF_RESC_TYPE_HW_MAX];
+	/** array of SRAM resource entries */
+	struct tf_rm_entry sram_entry[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Resource Manager Database
+ */
+struct tf_rm_db {
+	struct tf_rm_resc rx;
+	struct tf_rm_resc tx;
+};
+
+/**
+ * Helper function converting direction to text string
+ */
+const char
+*tf_dir_2_str(enum tf_dir dir);
+
+/**
+ * Helper function converting identifier to text string
+ */
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type);
+
+/**
+ * Helper function converting tcam type to text string
+ */
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+
+/**
+ * Helper function used to convert HW HCAPI resource type to a string.
+ */
+const char
+*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type);
+
+/**
+ * Helper function used to convert SRAM HCAPI resource type to a string.
+ */
+const char
+*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type);
+
+/**
  * Initializes the Resource Manager and the associated database
  * entries for HW and SRAM resources. Must be called before any other
  * Resource Manager functions.
@@ -143,4 +191,131 @@ int tf_rm_allocate_validate(struct tf *tfp);
  *   - (-ENOTEMPTY) if resources are not cleaned up before close
  */
 int tf_rm_close(struct tf *tfp);
+
+#if (TF_SHADOW == 1)
+/**
+ * Initializes Shadow DB of configuration elements
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * Returns:
+ *  0  - Success
+ */
+int tf_rm_shadow_db_init(struct tf_session *tfs);
+#endif /* TF_SHADOW */
+
+/**
+ * Perform a Session Pool lookup using the TCAM table type.
+ *
+ * Function will print an error msg if the tcam type is unsupported
+ * or the lookup failed.
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * [in] type
+ *   Type of the object
+ *
+ * [in] dir
+ *    Receive or transmit direction
+ *
+ * [in/out] pool
+ *   Session pool
+ *
+ * Returns:
+ *  0           - Success, will set the **pool
+ *  -EOPNOTSUPP - Type is not supported
+ */
+int
+tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
+			    enum tf_dir dir,
+			    enum tf_tcam_tbl_type type,
+			    struct bitalloc **pool);
+
+/**
+ * Perform a Session Pool lookup using the Table type.
+ *
+ * Function will print an error msg if the table type is unsupported
+ * or the lookup failed.
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * [in] type
+ *   Type of the object
+ *
+ * [in] dir
+ *    Receive or transmit direction
+ *
+ * [in/out] pool
+ *   Session pool
+ *
+ * Returns:
+ *  0           - Success, will set the **pool
+ *  -EOPNOTSUPP - Type is not supported
+ */
+int
+tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
+			   enum tf_dir dir,
+			   enum tf_tbl_type type,
+			   struct bitalloc **pool);
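+
+/*
+ * Example use of the pool lookup helpers above (illustrative only;
+ * ba_free_count() is the bitalloc helper used elsewhere in tf_rm.c):
+ *
+ *   struct bitalloc *pool;
+ *   int free_cnt;
+ *
+ *   rc = tf_rm_lookup_tcam_type_pool(tfs, TF_DIR_RX,
+ *                                    TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+ *                                    &pool);
+ *   if (rc == 0)
+ *           free_cnt = ba_free_count(pool);
+ */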
+
+/**
+ * Converts the TF Table Type to internal HCAPI_TYPE
+ *
+ * [in] type
+ *   Type to be converted
+ *
+ * [in/out] hcapi_type
+ *   Converted type
+ *
+ * Returns:
+ *  0           - Success, will set the *hcapi_type
+ *  -EOPNOTSUPP - Type is not supported
+ */
+int
+tf_rm_convert_tbl_type(enum tf_tbl_type type,
+		       uint32_t *hcapi_type);
+
+/**
+ * TF RM Convert of index methods.
+ */
+enum tf_rm_convert_type {
+	/** Adds the base of the Session Pool to the index */
+	TF_RM_CONVERT_ADD_BASE,
+	/** Removes the Session Pool base from the index */
+	TF_RM_CONVERT_RM_BASE
+};
+
+/**
+ * Provides conversion of the Table Type index in relation to the
+ * Session Pool base.
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * [in] dir
+ *    Receive or transmit direction
+ *
+ * [in] type
+ *   Type of the object
+ *
+ * [in] c_type
+ *   Type of conversion to perform
+ *
+ * [in] index
+ *   Index to be converted
+ *
+ * [in/out] convert_index
+ *   Pointer to the converted index
+ *
+ * Returns:
+ *  0           - Success, will set the *convert_index
+ *  -EOPNOTSUPP - Direction, type or conversion is not supported
+ *  -1          - Internal error
+ */
+int
+tf_rm_convert_index(struct tf_session *tfs,
+		    enum tf_dir dir,
+		    enum tf_tbl_type type,
+		    enum tf_rm_convert_type c_type,
+		    uint32_t index,
+		    uint32_t *convert_index);
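+
+/*
+ * Example (illustrative only): convert a pool-relative full action
+ * record index to its absolute SRAM index and back:
+ *
+ *   uint32_t abs_idx, rel_idx;
+ *
+ *   rc = tf_rm_convert_index(tfs, TF_DIR_TX,
+ *                            TF_TBL_TYPE_FULL_ACT_RECORD,
+ *                            TF_RM_CONVERT_ADD_BASE, 5, &abs_idx);
+ *   rc = tf_rm_convert_index(tfs, TF_DIR_TX,
+ *                            TF_TBL_TYPE_FULL_ACT_RECORD,
+ *                            TF_RM_CONVERT_RM_BASE, abs_idx, &rel_idx);
+ *
+ * After both calls rel_idx is 5 again; abs_idx is 5 plus the TX full
+ * action record base allocated to the session.
+ */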
+
 #endif /* TF_RM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index f845984..34b6c41 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -68,7 +68,7 @@ struct tf_session {
 	 */
 	bool shadow_copy;
 
-	/** 
+	/**
 	 * Session Reference Count. To keep track of functions per
 	 * session the ref_count is incremented. There is also a
 	 * parallel TruFlow Firmware ref_count in case the TruFlow
@@ -76,11 +76,215 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
+	/** Session HW and SRAM resources */
+	struct tf_rm_db resc;
+
+	/* Session HW resource pools */
+
+	/** RX L2 CTXT TCAM Pool */
+	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	/** TX L2 CTXT TCAM Pool */
+	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+
+	/** RX Profile Func Pool */
+	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
+	/** TX Profile Func Pool */
+	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
+
+	/** RX Profile TCAM Pool */
+	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
+	/** TX Profile TCAM Pool */
+	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
+
+	/** RX EM Profile ID Pool */
+	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
+	/** TX EM Profile ID Pool */
+	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
+
+	/** RX WC Profile Pool */
+	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
+	/** TX WC Profile Pool */
+	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
+
+	/* TBD, how do we want to handle EM records? */
+	/* EM Records are not controlled by way of a pool */
+
+	/** RX WC TCAM Pool */
+	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
+	/** TX WC TCAM Pool */
+	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
+
+	/** RX Meter Profile Pool */
+	BITALLOC_INST(TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
+	/** TX Meter Profile Pool */
+	BITALLOC_INST(TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
+
+	/** RX Meter Instance Pool */
+	BITALLOC_INST(TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
+	/** TX Meter Instance Pool */
+	BITALLOC_INST(TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
+
+	/** RX Mirror Configuration Pool */
+	BITALLOC_INST(TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
+	/** TX Mirror Configuration Pool */
+	BITALLOC_INST(TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
+
+	/** RX UPAR Pool */
+	BITALLOC_INST(TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
+	/** TX UPAR Pool */
+	BITALLOC_INST(TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
+
+	/** RX SP TCAM Pool */
+	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
+	/** TX SP TCAM Pool */
+	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
+
+	/** RX FKB Pool */
+	BITALLOC_INST(TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
+	/** TX FKB Pool */
+	BITALLOC_INST(TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
+
+	/** RX Table Scope Pool */
+	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
+	/** TX Table Scope Pool */
+	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
+
+	/** RX L2 Func Pool */
+	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
+	/** TX L2 Func Pool */
+	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
+
+	/** RX Epoch0 Pool */
+	BITALLOC_INST(TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
+	/** TX Epoch0 Pool */
+	BITALLOC_INST(TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
+
+	/** RX Epoch1 Pool */
+	BITALLOC_INST(TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
+	/** TX Epoch1 Pool */
+	BITALLOC_INST(TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
+
+	/** RX MetaData Profile Pool */
+	BITALLOC_INST(TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
+	/** TX MetaData Profile Pool */
+	BITALLOC_INST(TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
+
+	/** RX Connection Tracking State Pool */
+	BITALLOC_INST(TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
+	/** TX Connection Tracking State Pool */
+	BITALLOC_INST(TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
+
+	/** RX Range Profile Pool */
+	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
+	/** TX Range Profile Pool */
+	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
+
+	/** RX Range Pool */
+	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
+	/** TX Range Pool */
+	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
+
+	/** RX LAG Pool */
+	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
+	/** TX LAG Pool */
+	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
+
+	/* Session SRAM pools */
+
+	/** RX Full Action Record Pool */
+	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_RX,
+		      TF_RSVD_SRAM_FULL_ACTION_RX);
+	/** TX Full Action Record Pool */
+	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_TX,
+		      TF_RSVD_SRAM_FULL_ACTION_TX);
+
+	/** RX Multicast Group Pool, only RX is supported */
+	BITALLOC_INST(TF_SRAM_MCG_POOL_NAME_RX,
+		      TF_RSVD_SRAM_MCG_RX);
+
+	/** RX Encap 8B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_RX,
+		      TF_RSVD_SRAM_ENCAP_8B_RX);
+	/** TX Encap 8B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_ENCAP_8B_TX);
+
+	/** RX Encap 16B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_RX,
+		      TF_RSVD_SRAM_ENCAP_16B_RX);
+	/** TX Encap 16B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_ENCAP_16B_TX);
+
+	/** TX Encap 64B Pool, only TX is supported */
+	BITALLOC_INST(TF_SRAM_ENCAP_64B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_ENCAP_64B_TX);
+
+	/** RX Source Properties SMAC Pool */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_RX,
+		      TF_RSVD_SRAM_SP_SMAC_RX);
+	/** TX Source Properties SMAC Pool */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_TX,
+		      TF_RSVD_SRAM_SP_SMAC_TX);
+
+	/** TX Source Properties SMAC IPv4 Pool, only TX is supported */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
+		      TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
+
+	/** TX Source Properties SMAC IPv6 Pool, only TX is supported */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
+		      TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
+
+	/** RX Counter 64B Pool */
+	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_RX,
+		      TF_RSVD_SRAM_COUNTER_64B_RX);
+	/** TX Counter 64B Pool */
+	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_COUNTER_64B_TX);
+
+	/** RX NAT Source Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_SPORT_RX);
+	/** TX NAT Source Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_SPORT_TX);
+
+	/** RX NAT Destination Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_DPORT_RX);
+	/** TX NAT Destination Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_DPORT_TX);
+
+	/** RX NAT Source IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_S_IPV4_RX);
+	/** TX NAT Source IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_S_IPV4_TX);
+
+	/** RX NAT Destination IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_D_IPV4_RX);
+	/** TX NAT Destination IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_D_IPV4_TX);
+
+	/**
+	 * Pools not allocated from HCAPI RM
+	 */
+
+	/** RX L2 Ctx Remap ID  Pool */
+	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	/** TX L2 Ctx Remap ID Pool */
+	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+
 	/** CRC32 seed table */
 #define TF_LKUP_SEED_MEM_SIZE 512
 	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
+
 	/** Lookup3 init values */
 	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
-
 };
+
 #endif /* _TF_SESSION_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 08/34] net/bnxt: add resource manager functionality
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (6 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 07/34] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 09/34] net/bnxt: add tf core identifier support Venkat Duvvuru
                     ` (27 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TruFlow RM functionality for resource handling
- Update the TruFlow Resource Manager (RM) with resource
  support functions for debugging as well as resource cleanup.
- Add support for internal and external pools.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c    |   14 +
 drivers/net/bnxt/tf_core/tf_core.h    |   26 +
 drivers/net/bnxt/tf_core/tf_rm.c      | 1718 +++++++++++++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_session.h |   10 +
 drivers/net/bnxt/tf_core/tf_tbl.h     |   43 +
 5 files changed, 1735 insertions(+), 76 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.h

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index c4f23bd..259ffa2 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -148,6 +148,20 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup_close;
 	}
 
+	/* Shadow DB configuration */
+	if (parms->shadow_copy) {
+		/* Ignore shadow_copy setting */
+		session->shadow_copy = 0; /* parms->shadow_copy; */
+#if (TF_SHADOW == 1)
+		rc = tf_rm_shadow_db_init(tfs);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "Shadow DB Initialization failed, rc:%d\n",
+				    rc);
+		/* Add additional processing */
+#endif /* TF_SHADOW */
+	}
+
 	/* Adjust the Session with what firmware allowed us to get */
 	rc = tf_rm_allocate_validate(tfp);
 	if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 3455d8f..16c8251 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -30,6 +30,32 @@ enum tf_dir {
 	TF_DIR_MAX
 };
 
+/**
+ * External pool size
+ *
+ * Defines a single pool of external action records of
+ * fixed size.  Currently, this is an index.
+ */
+#define TF_EXT_POOL_ENTRY_SZ_BYTES 1
+
+/**
+ * External pool entry count
+ *
+ * Defines the number of entries in the external action pool
+ */
+#define TF_EXT_POOL_ENTRY_CNT (1 * 1024)
+
+/**
+ * Number of external pools
+ */
+#define TF_EXT_POOL_CNT_MAX 1
+
+/**
+ * External pool Id
+ */
+#define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
+#define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
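+
+/*
+ * Note (illustrative arithmetic): with the values above a single
+ * external pool covers TF_EXT_POOL_ENTRY_CNT *
+ * TF_EXT_POOL_ENTRY_SZ_BYTES = 1024 * 1 = 1024 bytes of index
+ * storage, and TF_EXT_POOL_CNT_MAX limits use to one pool.
+ */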
+
 /********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 56767e7..a5e96f29 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -104,9 +104,82 @@ const char
 	case TF_IDENT_TYPE_L2_FUNC:
 		return "l2_func";
 	default:
-		break;
+		return "Invalid identifier";
+	}
+}
+
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+{
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		return "l2_ctxt_tcam";
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		return "prof_tcam";
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		return "wc_tcam";
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+		return "veb_tcam";
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+		return "sp_tcam";
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+		return "ct_rule_tcam";
+	default:
+		return "Invalid tcam table type";
+	}
+}
+
+const char
+*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
+{
+	switch (hw_type) {
+	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
+		return "L2 ctxt tcam";
+	case TF_RESC_TYPE_HW_PROF_FUNC:
+		return "Profile Func";
+	case TF_RESC_TYPE_HW_PROF_TCAM:
+		return "Profile tcam";
+	case TF_RESC_TYPE_HW_EM_PROF_ID:
+		return "EM profile id";
+	case TF_RESC_TYPE_HW_EM_REC:
+		return "EM record";
+	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
+		return "WC tcam profile id";
+	case TF_RESC_TYPE_HW_WC_TCAM:
+		return "WC tcam";
+	case TF_RESC_TYPE_HW_METER_PROF:
+		return "Meter profile";
+	case TF_RESC_TYPE_HW_METER_INST:
+		return "Meter instance";
+	case TF_RESC_TYPE_HW_MIRROR:
+		return "Mirror";
+	case TF_RESC_TYPE_HW_UPAR:
+		return "UPAR";
+	case TF_RESC_TYPE_HW_SP_TCAM:
+		return "Source properties tcam";
+	case TF_RESC_TYPE_HW_L2_FUNC:
+		return "L2 Function";
+	case TF_RESC_TYPE_HW_FKB:
+		return "FKB";
+	case TF_RESC_TYPE_HW_TBL_SCOPE:
+		return "Table scope";
+	case TF_RESC_TYPE_HW_EPOCH0:
+		return "EPOCH0";
+	case TF_RESC_TYPE_HW_EPOCH1:
+		return "EPOCH1";
+	case TF_RESC_TYPE_HW_METADATA:
+		return "Metadata";
+	case TF_RESC_TYPE_HW_CT_STATE:
+		return "Connection tracking state";
+	case TF_RESC_TYPE_HW_RANGE_PROF:
+		return "Range profile";
+	case TF_RESC_TYPE_HW_RANGE_ENTRY:
+		return "Range entry";
+	case TF_RESC_TYPE_HW_LAG_ENTRY:
+		return "LAG";
+	default:
+		return "Invalid identifier";
 	}
-	return "Invalid identifier";
 }
 
 const char
@@ -145,6 +218,93 @@ const char
 }
 
 /**
+ * Helper function to perform a HW HCAPI resource type lookup against
+ * the reserved value of the same static type.
+ *
+ * Returns:
+ *   -EOPNOTSUPP - Reserved resource type not supported
+ *   Value       - Integer value of the reserved value for the requested type
+ */
+static int
+tf_rm_rsvd_hw_value(enum tf_dir dir, enum tf_resource_type_hw index)
+{
+	uint32_t value = -EOPNOTSUPP;
+
+	switch (index) {
+	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_L2_CTXT_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_PROF_FUNC:
+		TF_RESC_RSVD(dir, TF_RSVD_PROF_FUNC, value);
+		break;
+	case TF_RESC_TYPE_HW_PROF_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_PROF_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_EM_PROF_ID:
+		TF_RESC_RSVD(dir, TF_RSVD_EM_PROF_ID, value);
+		break;
+	case TF_RESC_TYPE_HW_EM_REC:
+		TF_RESC_RSVD(dir, TF_RSVD_EM_REC, value);
+		break;
+	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
+		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM_PROF_ID, value);
+		break;
+	case TF_RESC_TYPE_HW_WC_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_METER_PROF:
+		TF_RESC_RSVD(dir, TF_RSVD_METER_PROF, value);
+		break;
+	case TF_RESC_TYPE_HW_METER_INST:
+		TF_RESC_RSVD(dir, TF_RSVD_METER_INST, value);
+		break;
+	case TF_RESC_TYPE_HW_MIRROR:
+		TF_RESC_RSVD(dir, TF_RSVD_MIRROR, value);
+		break;
+	case TF_RESC_TYPE_HW_UPAR:
+		TF_RESC_RSVD(dir, TF_RSVD_UPAR, value);
+		break;
+	case TF_RESC_TYPE_HW_SP_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_SP_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_L2_FUNC:
+		TF_RESC_RSVD(dir, TF_RSVD_L2_FUNC, value);
+		break;
+	case TF_RESC_TYPE_HW_FKB:
+		TF_RESC_RSVD(dir, TF_RSVD_FKB, value);
+		break;
+	case TF_RESC_TYPE_HW_TBL_SCOPE:
+		TF_RESC_RSVD(dir, TF_RSVD_TBL_SCOPE, value);
+		break;
+	case TF_RESC_TYPE_HW_EPOCH0:
+		TF_RESC_RSVD(dir, TF_RSVD_EPOCH0, value);
+		break;
+	case TF_RESC_TYPE_HW_EPOCH1:
+		TF_RESC_RSVD(dir, TF_RSVD_EPOCH1, value);
+		break;
+	case TF_RESC_TYPE_HW_METADATA:
+		TF_RESC_RSVD(dir, TF_RSVD_METADATA, value);
+		break;
+	case TF_RESC_TYPE_HW_CT_STATE:
+		TF_RESC_RSVD(dir, TF_RSVD_CT_STATE, value);
+		break;
+	case TF_RESC_TYPE_HW_RANGE_PROF:
+		TF_RESC_RSVD(dir, TF_RSVD_RANGE_PROF, value);
+		break;
+	case TF_RESC_TYPE_HW_RANGE_ENTRY:
+		TF_RESC_RSVD(dir, TF_RSVD_RANGE_ENTRY, value);
+		break;
+	case TF_RESC_TYPE_HW_LAG_ENTRY:
+		TF_RESC_RSVD(dir, TF_RSVD_LAG_ENTRY, value);
+		break;
+	default:
+		break;
+	}
+
+	return value;
+}
+
+/**
  * Helper function to perform a SRAM HCAPI resource type lookup
  * against the reserved value of the same static type.
  *
@@ -205,6 +365,36 @@ tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
 }
 
 /**
+ * Helper function to print all the HW resource qcaps errors reported
+ * in the error_flag.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] hw_query
+ *   Pointer to the HW query result the error flags were derived from
+ *
+ * [in] error_flag
+ *   Pointer to the hw error flags created at time of the query check
+ */
+static void
+tf_rm_print_hw_qcaps_error(enum tf_dir dir,
+			   struct tf_rm_hw_query *hw_query,
+			   uint32_t *error_flag)
+{
+	int i;
+
+	PMD_DRV_LOG(ERR, "QCAPS errors HW\n");
+	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	PMD_DRV_LOG(ERR, "  Elements:\n");
+
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
+		if (*error_flag & 1 << i)
+			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+				    tf_hcapi_hw_2_str(i),
+				    hw_query->hw_query[i].max,
+				    tf_rm_rsvd_hw_value(dir, i));
+	}
+}
+
+/**
  * Helper function to print all the SRAM resource qcaps errors
  * reported in the error_flag.
  *
@@ -264,12 +454,139 @@ tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
 			    uint32_t *error_flag)
 {
 	*error_flag = 0;
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_L2_CTXT_TCAM,
+			     TF_RSVD_L2_CTXT_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_PROF_FUNC,
+			     TF_RSVD_PROF_FUNC,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_PROF_TCAM,
+			     TF_RSVD_PROF_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EM_PROF_ID,
+			     TF_RSVD_EM_PROF_ID,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EM_REC,
+			     TF_RSVD_EM_REC,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+			     TF_RSVD_WC_TCAM_PROF_ID,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_WC_TCAM,
+			     TF_RSVD_WC_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_METER_PROF,
+			     TF_RSVD_METER_PROF,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_METER_INST,
+			     TF_RSVD_METER_INST,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_MIRROR,
+			     TF_RSVD_MIRROR,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_UPAR,
+			     TF_RSVD_UPAR,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_SP_TCAM,
+			     TF_RSVD_SP_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_L2_FUNC,
+			     TF_RSVD_L2_FUNC,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_FKB,
+			     TF_RSVD_FKB,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_TBL_SCOPE,
+			     TF_RSVD_TBL_SCOPE,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EPOCH0,
+			     TF_RSVD_EPOCH0,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EPOCH1,
+			     TF_RSVD_EPOCH1,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_METADATA,
+			     TF_RSVD_METADATA,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_CT_STATE,
+			     TF_RSVD_CT_STATE,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_RANGE_PROF,
+			     TF_RSVD_RANGE_PROF,
+			     error_flag);
+
 	TF_RM_CHECK_HW_ALLOC(query,
 			     dir,
 			     TF_RESC_TYPE_HW_RANGE_ENTRY,
 			     TF_RSVD_RANGE_ENTRY,
 			     error_flag);
 
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_LAG_ENTRY,
+			     TF_RSVD_LAG_ENTRY,
+			     error_flag);
+
 	if (*error_flag != 0)
 		return -ENOMEM;
 
@@ -434,26 +751,584 @@ tf_rm_reserve_range(uint32_t count,
 			for (i = 0; i < rsv_begin; i++)
 				ba_alloc_index(pool, i);
 
-			/* Skip and then do the remaining */
-			if (rsv_end < max - 1) {
-				for (i = rsv_end; i < max; i++)
-					ba_alloc_index(pool, i);
-			}
-		}
-	}
+			/* Skip and then do the remaining */
+			if (rsv_end < max - 1) {
+				for (i = rsv_end; i < max; i++)
+					ba_alloc_index(pool, i);
+			}
+		}
+	}
+}
+
+/**
+ * Internal function to mark as allocated all the L2 ctxt entries
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
+	uint32_t end = 0;
+
+	/* l2 ctxt rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
+
+	/* l2 ctxt tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the profile tcam and
+ * profile func resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_prof(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_PROF_FUNC;
+	uint32_t end = 0;
+
+	/* profile func rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_FUNC,
+			    tfs->TF_PROF_FUNC_POOL_NAME_RX);
+
+	/* profile func tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_FUNC,
+			    tfs->TF_PROF_FUNC_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_PROF_TCAM;
+
+	/* profile tcam rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_TCAM,
+			    tfs->TF_PROF_TCAM_POOL_NAME_RX);
+
+	/* profile tcam tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_TCAM,
+			    tfs->TF_PROF_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the em profile id allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_em_prof(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_EM_PROF_ID;
+	uint32_t end = 0;
+
+	/* em prof id rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EM_PROF_ID,
+			    tfs->TF_EM_PROF_ID_POOL_NAME_RX);
+
+	/* em prof id tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EM_PROF_ID,
+			    tfs->TF_EM_PROF_ID_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the wildcard tcam and profile id
+ * resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_wc(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_WC_TCAM_PROF_ID;
+	uint32_t end = 0;
+
+	/* wc profile id rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_PROF_ID,
+			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX);
+
+	/* wc profile id tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_PROF_ID,
+			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_WC_TCAM;
+
+	/* wc tcam rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_TCAM_ROW,
+			    tfs->TF_WC_TCAM_POOL_NAME_RX);
+
+	/* wc tcam tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_TCAM_ROW,
+			    tfs->TF_WC_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the meter resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_meter(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_METER_PROF;
+	uint32_t end = 0;
+
+	/* meter profiles rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER_PROF,
+			    tfs->TF_METER_PROF_POOL_NAME_RX);
+
+	/* meter profiles tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER_PROF,
+			    tfs->TF_METER_PROF_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_METER_INST;
+
+	/* meter rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER,
+			    tfs->TF_METER_INST_POOL_NAME_RX);
+
+	/* meter tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER,
+			    tfs->TF_METER_INST_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the mirror resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_mirror(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_MIRROR;
+	uint32_t end = 0;
+
+	/* mirror rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_MIRROR,
+			    tfs->TF_MIRROR_POOL_NAME_RX);
+
+	/* mirror tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_MIRROR,
+			    tfs->TF_MIRROR_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the upar resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_upar(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_UPAR;
+	uint32_t end = 0;
+
+	/* upar rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_UPAR,
+			    tfs->TF_UPAR_POOL_NAME_RX);
+
+	/* upar tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_UPAR,
+			    tfs->TF_UPAR_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the sp tcam resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sp_tcam(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_SP_TCAM;
+	uint32_t end = 0;
+
+	/* sp tcam rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_SP_TCAM,
+			    tfs->TF_SP_TCAM_POOL_NAME_RX);
+
+	/* sp tcam tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_SP_TCAM,
+			    tfs->TF_SP_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the l2 func resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_l2_func(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
+	uint32_t end = 0;
+
+	/* l2 func rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_RX);
+
+	/* l2 func tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the fkb resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_fkb(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_FKB;
+	uint32_t end = 0;
+
+	/* fkb rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_FKB,
+			    tfs->TF_FKB_POOL_NAME_RX);
+
+	/* fkb tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_FKB,
+			    tfs->TF_FKB_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the tbl scope resources allocated
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_tbl_scope(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_TBL_SCOPE;
+	uint32_t end = 0;
+
+	/* tbl scope rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_TBL_SCOPE,
+			    tfs->TF_TBL_SCOPE_POOL_NAME_RX);
+
+	/* tbl scope tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_TBL_SCOPE,
+			    tfs->TF_TBL_SCOPE_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the epoch resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_epoch(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_EPOCH0;
+	uint32_t end = 0;
+
+	/* epoch0 rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH0,
+			    tfs->TF_EPOCH0_POOL_NAME_RX);
+
+	/* epoch0 tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH0,
+			    tfs->TF_EPOCH0_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_EPOCH1;
+
+	/* epoch1 rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH1,
+			    tfs->TF_EPOCH1_POOL_NAME_RX);
+
+	/* epoch1 tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH1,
+			    tfs->TF_EPOCH1_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the metadata resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_metadata(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_METADATA;
+	uint32_t end = 0;
+
+	/* metadata rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METADATA,
+			    tfs->TF_METADATA_POOL_NAME_RX);
+
+	/* metadata tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METADATA,
+			    tfs->TF_METADATA_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the ct state resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_ct_state(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_CT_STATE;
+	uint32_t end = 0;
+
+	/* ct state rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_CT_STATE,
+			    tfs->TF_CT_STATE_POOL_NAME_RX);
+
+	/* ct state tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_CT_STATE,
+			    tfs->TF_CT_STATE_POOL_NAME_TX);
 }
 
 /**
- * Internal function to mark all the l2 ctxt allocated that Truflow
- * does not own.
+ * Internal function to mark all the range resources allocated that
+ * Truflow does not own.
  */
 static void
-tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
+tf_rm_rsvd_range(struct tf_session *tfs)
 {
-	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
+	uint32_t index = TF_RESC_TYPE_HW_RANGE_PROF;
 	uint32_t end = 0;
 
-	/* l2 ctxt rx direction */
+	/* range profile rx direction */
 	if (tfs->resc.rx.hw_entry[index].stride > 0)
 		end = tfs->resc.rx.hw_entry[index].start +
 			tfs->resc.rx.hw_entry[index].stride - 1;
@@ -461,10 +1336,10 @@ tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
 			    tfs->resc.rx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
+			    TF_NUM_RANGE_PROF,
+			    tfs->TF_RANGE_PROF_POOL_NAME_RX);
 
-	/* l2 ctxt tx direction */
+	/* range profile tx direction */
 	if (tfs->resc.tx.hw_entry[index].stride > 0)
 		end = tfs->resc.tx.hw_entry[index].start +
 			tfs->resc.tx.hw_entry[index].stride - 1;
@@ -472,21 +1347,45 @@ tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
 			    tfs->resc.tx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
+			    TF_NUM_RANGE_PROF,
+			    tfs->TF_RANGE_PROF_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_RANGE_ENTRY;
+
+	/* range entry rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_RANGE_ENTRY,
+			    tfs->TF_RANGE_ENTRY_POOL_NAME_RX);
+
+	/* range entry tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_RANGE_ENTRY,
+			    tfs->TF_RANGE_ENTRY_POOL_NAME_TX);
 }
 
 /**
- * Internal function to mark all the l2 func resources allocated that
+ * Internal function to mark all the lag resources allocated that
  * Truflow does not own.
  */
 static void
-tf_rm_rsvd_l2_func(struct tf_session *tfs)
+tf_rm_rsvd_lag_entry(struct tf_session *tfs)
 {
-	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
+	uint32_t index = TF_RESC_TYPE_HW_LAG_ENTRY;
 	uint32_t end = 0;
 
-	/* l2 func rx direction */
+	/* lag entry rx direction */
 	if (tfs->resc.rx.hw_entry[index].stride > 0)
 		end = tfs->resc.rx.hw_entry[index].start +
 			tfs->resc.rx.hw_entry[index].stride - 1;
@@ -494,10 +1393,10 @@ tf_rm_rsvd_l2_func(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
 			    tfs->resc.rx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_RX);
+			    TF_NUM_LAG_ENTRY,
+			    tfs->TF_LAG_ENTRY_POOL_NAME_RX);
 
-	/* l2 func tx direction */
+	/* lag entry tx direction */
 	if (tfs->resc.tx.hw_entry[index].stride > 0)
 		end = tfs->resc.tx.hw_entry[index].start +
 			tfs->resc.tx.hw_entry[index].stride - 1;
@@ -505,8 +1404,8 @@ tf_rm_rsvd_l2_func(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
 			    tfs->resc.tx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_TX);
+			    TF_NUM_LAG_ENTRY,
+			    tfs->TF_LAG_ENTRY_POOL_NAME_TX);
 }
 
 /**
@@ -909,7 +1808,21 @@ tf_rm_reserve_hw(struct tf *tfp)
	 * used except the resources that Truflow took ownership of.
 	 */
 	tf_rm_rsvd_l2_ctxt(tfs);
+	tf_rm_rsvd_prof(tfs);
+	tf_rm_rsvd_em_prof(tfs);
+	tf_rm_rsvd_wc(tfs);
+	tf_rm_rsvd_mirror(tfs);
+	tf_rm_rsvd_meter(tfs);
+	tf_rm_rsvd_upar(tfs);
+	tf_rm_rsvd_sp_tcam(tfs);
 	tf_rm_rsvd_l2_func(tfs);
+	tf_rm_rsvd_fkb(tfs);
+	tf_rm_rsvd_tbl_scope(tfs);
+	tf_rm_rsvd_epoch(tfs);
+	tf_rm_rsvd_metadata(tfs);
+	tf_rm_rsvd_ct_state(tfs);
+	tf_rm_rsvd_range(tfs);
+	tf_rm_rsvd_lag_entry(tfs);
 }
 
 /**
@@ -972,6 +1885,7 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
 			tf_dir_2_str(dir),
 			error_flag);
+		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
 		goto cleanup;
 	}
 
@@ -1032,65 +1946,388 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	struct tf_rm_entry *sram_entries;
 	uint32_t error_flag;
 
-	if (dir == TF_DIR_RX)
-		sram_entries = tfs->resc.rx.sram_entry;
-	else
-		sram_entries = tfs->resc.tx.sram_entry;
+	if (dir == TF_DIR_RX)
+		sram_entries = tfs->resc.rx.sram_entry;
+	else
+		sram_entries = tfs->resc.tx.sram_entry;
+
+	/* Query for Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+			tf_dir_2_str(dir),
+			error_flag);
+		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
+		goto cleanup;
+	}
+
+	/* Post process SRAM capability */
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
+		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+
+	/* Allocate Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_alloc(tfp,
+					    dir,
+					    &sram_alloc,
+					    sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	/* Perform SRAM allocation validation as it's possible the
+	 * resource availability changed between qcaps and alloc
+	 */
+	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	return 0;
+
+ cleanup:
+	return -1;
+}
+
+/**
+ * Helper function used to prune a HW resource array to only hold
+ * elements that need to be flushed.
+ *
+ * [in] tfs
+ *   Session handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] hw_entries
+ *   Master HW Resource database
+ *
+ * [in/out] flush_entries
+ *   Pruned HW Resource database of entries to be flushed. This
+ *   array should be passed in as a complete copy of the master HW
+ *   Resource database. On return, entries that do not require
+ *   flushing have been zeroed out.
+ *
+ * Returns:
+ *    0 - Success, no flush required
+ *    1 - Success, flush required
+ *   -1 - Internal error
+ */
+static int
+tf_rm_hw_to_flush(struct tf_session *tfs,
+		  enum tf_dir dir,
+		  struct tf_rm_entry *hw_entries,
+		  struct tf_rm_entry *flush_entries)
+{
+	int rc;
+	int flush_rc = 0;
+	int free_cnt;
+	struct bitalloc *pool;
+
+	/* Check all the HW resource pools for leftover elements. Any
+	 * leftover element causes the complete pool of that type to be
+	 * flushed.
+	 */
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_L2_CTXT_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_PROF_FUNC_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride) {
+		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_PROF_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_EM_PROF_ID_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride) {
+		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	flush_entries[TF_RESC_TYPE_HW_EM_REC].start = 0;
+	flush_entries[TF_RESC_TYPE_HW_EM_REC].stride = 0;
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_WC_TCAM_PROF_ID_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride) {
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_WC_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_METER_PROF_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_PROF].stride) {
+		flush_entries[TF_RESC_TYPE_HW_METER_PROF].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_METER_PROF].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_METER_INST_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_INST].stride) {
+		flush_entries[TF_RESC_TYPE_HW_METER_INST].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_METER_INST].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_MIRROR_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_MIRROR].stride) {
+		flush_entries[TF_RESC_TYPE_HW_MIRROR].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_MIRROR].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_UPAR_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_UPAR].stride) {
+		flush_entries[TF_RESC_TYPE_HW_UPAR].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_UPAR].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SP_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_SP_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_L2_FUNC_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_FUNC].stride) {
+		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_FKB_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_FKB].stride) {
+		flush_entries[TF_RESC_TYPE_HW_FKB].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_FKB].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
 
-	/* Query for Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed\n",
-			    tf_dir_2_str(dir));
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_TBL_SCOPE_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride) {
+		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
+	} else {
+		PMD_DRV_LOG(ERR, "%s: TBL_SCOPE free_cnt:%d, entries:%d\n",
+			    tf_dir_2_str(dir),
+			    free_cnt,
+			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
+		flush_rc = 1;
 	}
 
-	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
-			tf_dir_2_str(dir),
-			error_flag);
-		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_EPOCH0_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH0].stride) {
+		flush_entries[TF_RESC_TYPE_HW_EPOCH0].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_EPOCH0].stride = 0;
+	} else {
+		flush_rc = 1;
 	}
 
-	/* Post process SRAM capability */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
-		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_EPOCH1_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH1].stride) {
+		flush_entries[TF_RESC_TYPE_HW_EPOCH1].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_EPOCH1].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
 
-	/* Allocate Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_alloc(tfp,
-					    dir,
-					    &sram_alloc,
-					    sram_entries);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed\n",
-			    tf_dir_2_str(dir));
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_METADATA_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METADATA].stride) {
+		flush_entries[TF_RESC_TYPE_HW_METADATA].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_METADATA].stride = 0;
+	} else {
+		flush_rc = 1;
 	}
 
-	/* Perform SRAM allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed\n",
-			    tf_dir_2_str(dir));
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_CT_STATE_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_CT_STATE].stride) {
+		flush_entries[TF_RESC_TYPE_HW_CT_STATE].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_CT_STATE].stride = 0;
+	} else {
+		flush_rc = 1;
 	}
 
-	return 0;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_RANGE_PROF_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride) {
+		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
 
- cleanup:
-	return -1;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_RANGE_ENTRY_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride) {
+		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_LAG_ENTRY_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride) {
+		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	return flush_rc;
 }
 
 /**
@@ -1335,6 +2572,32 @@ tf_rm_sram_to_flush(struct tf_session *tfs,
 }
 
 /**
+ * Helper function used to generate an error log for the HW types that
+ * need to be flushed. The types should have been cleaned up ahead of
+ * invoking tf_close_session.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] hw_entries
+ *   HW Resource database holding elements to be flushed
+ */
+static void
+tf_rm_log_hw_flush(enum tf_dir dir,
+		   struct tf_rm_entry *hw_entries)
+{
+	int i;
+
+	/* Walk the hw flush array and log the types that weren't
+	 * cleaned up.
+	 */
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
+		if (hw_entries[i].stride != 0)
+			PMD_DRV_LOG(ERR,
+				    "%s: %s was not cleaned up\n",
+				    tf_dir_2_str(dir),
+				    tf_hcapi_hw_2_str(i));
+	}
+}
+
+/**
  * Helper function used to generate an error log for the SRAM types
 * that need to be flushed. The types should have been cleaned up
  * ahead of invoking tf_close_session.
@@ -1386,6 +2649,53 @@ tf_rm_init(struct tf *tfp __rte_unused)
 	/* Initialization of HW Resource Pools */
 	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
 	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
+	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
+	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
+	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
+	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
+	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
+
+	/* TBD: how do we want to handle EM records? */
+	/* EM Records should not be controlled by way of a pool */
+
+	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
+	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
+	ba_init(tfs->TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
+	ba_init(tfs->TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
+	ba_init(tfs->TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
+	ba_init(tfs->TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
+	ba_init(tfs->TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
+	ba_init(tfs->TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
+	ba_init(tfs->TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
+	ba_init(tfs->TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
+	ba_init(tfs->TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
+	ba_init(tfs->TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
+
+	ba_init(tfs->TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
+	ba_init(tfs->TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
+
+	ba_init(tfs->TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
+	ba_init(tfs->TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
+
+	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
+	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
+	ba_init(tfs->TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
+	ba_init(tfs->TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
+	ba_init(tfs->TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
+	ba_init(tfs->TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
+	ba_init(tfs->TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
+	ba_init(tfs->TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
+	ba_init(tfs->TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
+	ba_init(tfs->TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
+	ba_init(tfs->TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
+	ba_init(tfs->TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
+	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
+	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
+	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
+	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
+	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
+	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
 
 	/* Initialization of SRAM Resource Pools
 	 * These pools are set to the TFLIB defined MAX sizes not
@@ -1476,6 +2786,7 @@ tf_rm_close(struct tf *tfp)
 	int rc_close = 0;
 	int i;
 	struct tf_rm_entry *hw_entries;
+	struct tf_rm_entry *hw_flush_entries;
 	struct tf_rm_entry *sram_entries;
 	struct tf_rm_entry *sram_flush_entries;
 	struct tf_session *tfs __rte_unused =
@@ -1501,14 +2812,41 @@ tf_rm_close(struct tf *tfp)
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		if (i == TF_DIR_RX) {
 			hw_entries = tfs->resc.rx.hw_entry;
+			hw_flush_entries = flush_resc.rx.hw_entry;
 			sram_entries = tfs->resc.rx.sram_entry;
 			sram_flush_entries = flush_resc.rx.sram_entry;
 		} else {
 			hw_entries = tfs->resc.tx.hw_entry;
+			hw_flush_entries = flush_resc.tx.hw_entry;
 			sram_entries = tfs->resc.tx.sram_entry;
 			sram_flush_entries = flush_resc.tx.sram_entry;
 		}
 
+		/* Check for any not previously freed HW resources and
+		 * flush if required.
+		 */
+		rc = tf_rm_hw_to_flush(tfs, i, hw_entries, hw_flush_entries);
+		if (rc) {
+			rc_close = -ENOTEMPTY;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, lingering HW resources\n",
+				    tf_dir_2_str(i));
+
+			/* Log the entries to be flushed */
+			tf_rm_log_hw_flush(i, hw_flush_entries);
+			rc = tf_msg_session_hw_resc_flush(tfp,
+							  i,
+							  hw_flush_entries);
+			if (rc) {
+				rc_close = rc;
+				/* Log error */
+				PMD_DRV_LOG(ERR,
+					    "%s, HW flush failed\n",
+					    tf_dir_2_str(i));
+			}
+		}
+
 		/* Check for any not previously freed SRAM resources
 		 * and flush if required.
 		 */
@@ -1560,6 +2898,234 @@ tf_rm_close(struct tf *tfp)
 	return rc_close;
 }
 
+#if (TF_SHADOW == 1)
+int
+tf_rm_shadow_db_init(struct tf_session *tfs __rte_unused)
+{
+	int rc = 1;
+
+	return rc;
+}
+#endif /* TF_SHADOW */
+
+int
+tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
+			    enum tf_dir dir,
+			    enum tf_tcam_tbl_type type,
+			    struct bitalloc **pool)
+{
+	int rc = -EOPNOTSUPP;
+
+	*pool = NULL;
+
+	switch (type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_L2_CTXT_TCAM_POOL_NAME,
+				rc);
+		break;
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_PROF_TCAM_POOL_NAME,
+				rc);
+		break;
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_WC_TCAM_POOL_NAME,
+				rc);
+		break;
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+	default:
+		break;
+	}
+
+	if (rc == -EOPNOTSUPP) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Tcam type not supported, type:%d\n",
+			    dir,
+			    type);
+		return rc;
+	} else if (rc == -1) {
+		PMD_DRV_LOG(ERR,
+			    "%s, Tcam type lookup failed, type:%d\n",
+			    tf_dir_2_str(dir),
+			    type);
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
+			   enum tf_dir dir,
+			   enum tf_tbl_type type,
+			   struct bitalloc **pool)
+{
+	int rc = -EOPNOTSUPP;
+
+	*pool = NULL;
+
+	switch (type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_FULL_ACTION_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		/* No pools for TX direction, so bail out */
+		if (dir == TF_DIR_TX)
+			break;
+		TF_RM_GET_POOLS_RX(tfs, pool,
+				   TF_SRAM_MCG_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_ENCAP_8B_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_ENCAP_16B_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		/* No pools for RX direction, so bail out */
+		if (dir == TF_DIR_RX)
+			break;
+		TF_RM_GET_POOLS_TX(tfs, pool,
+				   TF_SRAM_ENCAP_64B_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_SP_SMAC_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		/* No pools for RX direction, so bail out */
+		if (dir == TF_DIR_RX)
+			break;
+		TF_RM_GET_POOLS_TX(tfs, pool,
+				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		/* No pools for RX direction, so bail out */
+		if (dir == TF_DIR_RX)
+			break;
+		TF_RM_GET_POOLS_TX(tfs, pool,
+				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_STATS_64:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_STATS_64B_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_NAT_SPORT_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_NAT_S_IPV4_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_NAT_D_IPV4_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_METER_PROF:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_METER_PROF_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_METER_INST:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_METER_INST_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_MIRROR_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_UPAR:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_UPAR_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_EPOCH0:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_EPOCH0_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_EPOCH1:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_EPOCH1_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_METADATA:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_METADATA_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_CT_STATE:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_CT_STATE_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_RANGE_PROF:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_RANGE_PROF_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_RANGE_ENTRY_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_LAG:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_LAG_ENTRY_POOL_NAME,
+				rc);
+		break;
+	/* Not yet supported */
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+	case TF_TBL_TYPE_VNIC_SVIF:
+		break;
+	/* No bitalloc pools for these types */
+	case TF_TBL_TYPE_EXT:
+	case TF_TBL_TYPE_EXT_0:
+	default:
+		break;
+	}
+
+	if (rc == -EOPNOTSUPP) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Table type not supported, type:%d\n",
+			    dir,
+			    type);
+		return rc;
+	} else if (rc == -1) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Table type lookup failed, type:%d\n",
+			    dir,
+			    type);
+		return rc;
+	}
+
+	return 0;
+}
+
 int
 tf_rm_convert_tbl_type(enum tf_tbl_type type,
 		       uint32_t *hcapi_type)
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 34b6c41..fed34f1 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -12,6 +12,7 @@
 #include "bitalloc.h"
 #include "tf_core.h"
 #include "tf_rm.h"
+#include "tf_tbl.h"
 
 /** Session defines
  */
@@ -285,6 +286,15 @@ struct tf_session {
 
 	/** Lookup3 init values */
 	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
+
+	/** Table scope array */
+	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+	/** Each external pool is associated with a single table scope.
+	 *  For each external pool, store the associated table scope in
+	 *  this data structure.
+	 */
+	uint32_t ext_pool_2_scope[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 };
 
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
new file mode 100644
index 0000000..5a5e72f
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_TBL_H_
+#define _TF_TBL_H_
+
+#include <stdint.h>
+
+enum tf_pg_tbl_lvl {
+	PT_LVL_0,
+	PT_LVL_1,
+	PT_LVL_2,
+	PT_LVL_MAX
+};
+
+/** Invalid table scope id */
+#define TF_TBL_SCOPE_INVALID 0xffffffff
+
+/**
+ * Table Scope Control Block
+ *
+ * Holds private data for a table scope. Only one instance of a table
+ * scope with Internal EM is supported.
+ */
+struct tf_tbl_scope_cb {
+	uint32_t tbl_scope_id;
+	int index;
+	uint32_t *ext_pool_mem[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
+};
+
+/**
+ * Initialize table pool structure to indicate
+ * no table scope has been associated with the
+ * external pool of indexes.
+ *
+ * [in] session
+ */
+void
+tf_init_tbl_pool(struct tf_session *session);
+
+#endif /* _TF_TBL_H_ */
-- 
2.7.4
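
A note on the reservation pattern used throughout this patch: every
tf_rm_rsvd_*() helper computes the [start, end] window of entries that
firmware granted to TruFlow, then calls tf_rm_reserve_range() to mark
every other index in the bitalloc pool as allocated, so that only
TruFlow-owned entries can be handed out later. A minimal sketch of
that core loop, mirroring the loop bounds visible in
tf_rm_reserve_range() above (the helper name and parameter names here
are illustrative, not part of the driver):

/* Sketch only: mark all pool indices outside the TruFlow-owned
 * window as already allocated, so ba_alloc() never returns them.
 */
static void
reserve_outside_range(struct bitalloc *pool,
		      uint32_t rsv_begin, /* start of owned window */
		      uint32_t rsv_end,   /* end of owned window */
		      uint32_t max)       /* total pool size */
{
	uint32_t i;

	/* Reserve indices [0, rsv_begin) below the owned window */
	for (i = 0; i < rsv_begin; i++)
		ba_alloc_index(pool, i);

	/* Reserve indices [rsv_end, max) above the owned window */
	if (rsv_end < max - 1)
		for (i = rsv_end; i < max; i++)
			ba_alloc_index(pool, i);
}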


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 09/34] net/bnxt: add tf core identifier support
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (7 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 08/34] net/bnxt: add resource manager functionality Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 10/34] net/bnxt: add tf core TCAM support Venkat Duvvuru
                     ` (26 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Farah Smith

From: Farah Smith <farah.smith@broadcom.com>

- Add TruFlow Identifier resource support
- Add TruFlow public API for Identifier resources.
- Add support code and stack for Identifier resource allocation control.
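
A minimal usage sketch of the API added here (the wrapper function
below is hypothetical; it assumes a session has already been opened so
tfp->session is valid):

/* Hypothetical example: allocate and free an RX L2 context
 * identifier using tf_alloc_identifier()/tf_free_identifier().
 */
static int example_identifier_cycle(struct tf *tfp)
{
	struct tf_alloc_identifier_parms aparms = { 0 };
	struct tf_free_identifier_parms fparms = { 0 };
	int rc;

	aparms.dir = TF_DIR_RX;
	aparms.ident_type = TF_IDENT_TYPE_L2_CTXT;

	/* Allocates from the session-owned pool; no firmware message */
	rc = tf_alloc_identifier(tfp, &aparms);
	if (rc)
		return rc;

	fparms.dir = TF_DIR_RX;
	fparms.ident_type = TF_IDENT_TYPE_L2_CTXT;
	fparms.id = aparms.id;

	/* Returns the id to the pool; the complete pool goes back to
	 * firmware at tf_close time
	 */
	return tf_free_identifier(tfp, &fparms);
}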

Signed-off-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c | 156 +++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h |  55 +++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c  |  13 ++++
 3 files changed, 224 insertions(+)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 259ffa2..037f7d1 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -283,3 +283,159 @@ tf_close_session(struct tf *tfp)
 
 	return rc_close;
 }
+
+/** allocate identifier resource
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_identifier(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms)
+{
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+	int id;
+	int rc;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	switch (parms->ident_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_L2_CTXT_REMAP_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_PROF_FUNC:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_PROF_FUNC_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_EM_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_EM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_WC_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_WC_TCAM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_L2_FUNC:
+		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EOPNOTSUPP;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "%s: invalid %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EINVAL;
+		break;
+	}
+
+	if (rc) {
+		PMD_DRV_LOG(ERR, "%s: identifier pool %s failure\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		return rc;
+	}
+
+	id = ba_alloc(session_pool);
+
+	if (id == BA_FAIL) {
+		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		return -ENOMEM;
+	}
+	parms->id = id;
+	return 0;
+}
+
+/** free identifier resource
+ *
+ * Returns success or failure code.
+ */
+int tf_free_identifier(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms)
+{
+	struct bitalloc *session_pool;
+	int rc;
+	int ba_rc;
+	struct tf_session *tfs;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: Session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	switch (parms->ident_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_L2_CTXT_REMAP_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_PROF_FUNC:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_PROF_FUNC_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_EM_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_EM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_WC_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_WC_TCAM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_L2_FUNC:
+		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EOPNOTSUPP;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "%s: invalid %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EINVAL;
+		break;
+	}
+	if (rc) {
+		PMD_DRV_LOG(ERR, "%s: %s Identifier pool access failed\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		return rc;
+	}
+
+	ba_rc = ba_inuse(session_pool, (int)parms->id);
+
+	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
+		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type),
+			    parms->id);
+		return -EINVAL;
+	}
+
+	ba_free(session_pool, (int)parms->id);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 16c8251..afad9ea 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -402,6 +402,61 @@ enum tf_identifier_type {
 	TF_IDENT_TYPE_L2_FUNC
 };
 
+/** tf_alloc_identifier parameter definition
+ */
+struct tf_alloc_identifier_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [out] Identifier allocated
+	 */
+	uint16_t id;
+};
+
+/** tf_free_identifier parameter definition
+ */
+struct tf_free_identifier_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [in] ID to free
+	 */
+	uint16_t id;
+};
+
+/** allocate identifier resource
+ *
+ * TruFlow core will allocate a free id from the per identifier resource type
+ * pool reserved for the session during tf_open().  No firmware is involved.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_identifier(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms);
+
+/** free identifier resource
+ *
+ * TruFlow core will return an id back to the per identifier resource type pool
+ * reserved for the session.  No firmware is involved.  During tf_close, the
+ * complete pool is returned to the firmware.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_identifier(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms);
+
 /**
  * TCAM table type
  */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 4ce2bc5..c44f96f 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -94,6 +94,19 @@
 } while (0)
 
 /**
+ * This is the MAX data we can transport across regular HWRM
+ */
+#define TF_PCI_BUF_SIZE_MAX 88
+
+/**
+ * If data is bigger than TF_PCI_BUF_SIZE_MAX then the DMA method is used
+ */
+struct tf_msg_dma_buf {
+	void *va_addr;
+	uint64_t pa_addr;
+};
+
+/**
  * Sends session open request to TF Firmware
  */
 int
-- 
2.7.4
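
The tf_msg_dma_buf structure added above is exercised by later patches
in the series; the rule it supports is purely size based. A hedged
sketch of that dispatch (example_send_payload and its parameters are
illustrative, not driver functions; assumes <string.h> and the
definitions from tf_msg.c above):

/* Sketch: payloads that fit in a regular HWRM message are copied
 * inline; larger payloads are staged in a DMA buffer instead.
 */
static int example_send_payload(struct tf_msg_dma_buf *buf,
				uint8_t *data, int data_size)
{
	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
		/* Inline: copy data directly into the HWRM request */
		return 0;
	}

	/* DMA: stage the data in host memory; the request then
	 * carries buf->pa_addr in place of the inline payload.
	 */
	memcpy(buf->va_addr, data, data_size);
	return 1;
}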


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 10/34] net/bnxt: add tf core TCAM support
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (8 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 09/34] net/bnxt: add tf core identifier support Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 11/34] net/bnxt: add tf core table scope support Venkat Duvvuru
                     ` (25 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle, Jay Ding

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add TruFlow TCAM public API functions
- Add TCAM support functions as well as public APIs.

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c | 163 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h | 227 +++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c  | 159 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h  |  30 +++++
 4 files changed, 579 insertions(+)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 037f7d1..152cfa2 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -439,3 +439,166 @@ int tf_free_identifier(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_alloc_tcam_entry(struct tf *tfp,
+		    struct tf_alloc_tcam_entry_parms *parms)
+{
+	int rc;
+	int index;
+	struct tf_session *tfs;
+	struct bitalloc *session_pool;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	rc = tf_rm_lookup_tcam_type_pool(tfs,
+					 parms->dir,
+					 parms->tcam_tbl_type,
+					 &session_pool);
+	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
+	if (rc)
+		return rc;
+
+	index = ba_alloc(session_pool);
+	if (index == BA_FAIL) {
+		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+		return -ENOMEM;
+	}
+
+	parms->idx = index;
+	return 0;
+}
+
+int
+tf_set_tcam_entry(struct tf *tfp,
+		  struct tf_set_tcam_entry_parms *parms)
+{
+	int rc;
+	int id;
+	struct tf_session *tfs;
+	struct bitalloc *session_pool;
+
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "%s, Session info invalid\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/*
+	 * Each tcam send msg function should check for key sizes range
+	 */
+
+	rc = tf_rm_lookup_tcam_type_pool(tfs,
+					 parms->dir,
+					 parms->tcam_tbl_type,
+					 &session_pool);
+	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
+	if (rc)
+		return rc;
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, parms->idx);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "%s: %s: Invalid or not allocated index, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	rc = tf_msg_tcam_entry_set(tfp, parms);
+
+	return rc;
+}
+
+int
+tf_get_tcam_entry(struct tf *tfp __rte_unused,
+		  struct tf_get_tcam_entry_parms *parms __rte_unused)
+{
+	int rc = -EOPNOTSUPP;
+
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "%s, Session info invalid\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+int
+tf_free_tcam_entry(struct tf *tfp,
+		   struct tf_free_tcam_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct bitalloc *session_pool;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: Session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	rc = tf_rm_lookup_tcam_type_pool(tfs,
+					 parms->dir,
+					 parms->tcam_tbl_type,
+					 &session_pool);
+	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
+	if (rc)
+		return rc;
+
+	rc = ba_inuse(session_pool, (int)parms->idx);
+	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
+		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+			    parms->idx);
+		return -EINVAL;
+	}
+
+	ba_free(session_pool, (int)parms->idx);
+
+	rc = tf_msg_tcam_entry_free(tfp, parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR, "%s: %s: Entry %d free failed\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+			    parms->idx);
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index afad9ea..1431d06 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -472,6 +472,233 @@ enum tf_tcam_tbl_type {
 };
 
 /**
+ * @page tcam TCAM Access
+ *
+ * @ref tf_alloc_tcam_entry
+ *
+ * @ref tf_set_tcam_entry
+ *
+ * @ref tf_get_tcam_entry
+ *
+ * @ref tf_free_tcam_entry
+ */
+
+/** tf_alloc_tcam_entry parameter definition
+ */
+struct tf_alloc_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Enable search for matching entry
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Key data to match on (if search)
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key size in bits (if search)
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] Mask data to match on (if search)
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
+	/**
+	 * [out] If search, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current refcnt after allocation
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx allocated
+	 */
+	uint16_t idx;
+};
+
+/** allocate TCAM entry
+ *
+ * Allocate a TCAM entry - one of these types:
+ *
+ * L2 Context
+ * Profile TCAM
+ * WC TCAM
+ * VEB TCAM
+ *
+ * This function allocates a TCAM table record. It attempts to
+ * allocate an entry from the session-owned TCAM entries or, if
+ * search is enabled, searches a shadow copy of the TCAM table for a
+ * matching entry. Key, mask and result must all match for hit to be
+ * set. Only TruFlow core data is accessed; a hash-table-to-entry
+ * mapping is maintained for search purposes.
+ * If search is not enabled, the first available free entry is
+ * returned based on priority. If search is enabled and an entry
+ * matching the key and mask is found, hit is set to TRUE. In both
+ * cases the allocated (or found) idx and the current ref_cnt are
+ * returned.
+ *
+ * Also returns success or failure code.
+ */
+int tf_alloc_tcam_entry(struct tf *tfp,
+			struct tf_alloc_tcam_entry_parms *parms);
+
+/** tf_set_tcam_entry parameter definition
+ */
+struct tf_set_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] base index of the entry to program
+	 */
+	uint16_t idx;
+	/**
+	 * [in] pointer to the key data
+	 */
+	uint8_t *key;
+	/**
+	 * [in] pointer to the mask data
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] key size in bits
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] pointer to the result data
+	 */
+	uint8_t *result;
+	/**
+	 * [in] result size in bits
+	 */
+	uint16_t result_sz_in_bits;
+};
+
+/** set TCAM entry
+ *
+ * Program a TCAM table entry for a TruFlow session.
+ *
+ * If the entry has not been allocated, an error will be returned.
+ *
+ * Returns success or failure code.
+ */
+int tf_set_tcam_entry(struct tf *tfp,
+		      struct tf_set_tcam_entry_parms *parms);
+
+/** tf_get_tcam_entry parameter definition
+ */
+struct tf_get_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] index of the entry to get
+	 */
+	uint16_t idx;
+	/**
+	 * [out] buffer the key is read into
+	 */
+	uint8_t *key;
+	/**
+	 * [out] buffer the mask is read into
+	 */
+	uint8_t *mask;
+	/**
+	 * [out] key size in bits
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [out] buffer the result is read into
+	 */
+	uint8_t *result;
+	/**
+	 * [out] result size in bits
+	 */
+	uint16_t result_sz_in_bits;
+};
+
+/** get TCAM entry
+ *
+ * Read a TCAM table entry for a TruFlow session.
+ *
+ * If the entry has not been allocated, an error will be returned.
+ *
+ * Returns success or failure code.
+ */
+int tf_get_tcam_entry(struct tf *tfp,
+		      struct tf_get_tcam_entry_parms *parms);
+
+/** tf_free_tcam_entry parameter definition
+ */
+struct tf_free_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Index to free
+	 */
+	uint16_t idx;
+	/**
+	 * [out] reference count after free
+	 */
+	uint16_t ref_cnt;
+};
+
+/** free TCAM entry
+ *
+ * Free a previously allocated TCAM entry.
+ *
+ * Firmware checks that the TCAM entry is owned by the TruFlow
+ * session; the entry is invalidated by writing an all-ones mask to
+ * the hardware.
+ *
+ * A WC TCAM profile id of 0 must be used to invalidate an entry.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tcam_entry(struct tf *tfp,
+		       struct tf_free_tcam_entry_parms *parms);
+
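+/*
+ * Illustrative usage sketch (editor's note, not part of this patch):
+ * a minimal alloc/set/free sequence for an L2 context TCAM entry.
+ * The key, mask and result buffers and their sizes are hypothetical
+ * application data.
+ *
+ *	struct tf_alloc_tcam_entry_parms ap = { 0 };
+ *	struct tf_set_tcam_entry_parms sp = { 0 };
+ *	struct tf_free_tcam_entry_parms fp = { 0 };
+ *
+ *	ap.dir = TF_DIR_RX;
+ *	ap.tcam_tbl_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM;
+ *	ap.key = key;
+ *	ap.mask = mask;
+ *	ap.key_sz_in_bits = key_bits;
+ *	if (tf_alloc_tcam_entry(tfp, &ap) == 0) {
+ *		sp.dir = ap.dir;
+ *		sp.tcam_tbl_type = ap.tcam_tbl_type;
+ *		sp.idx = ap.idx;
+ *		sp.key = key;
+ *		sp.mask = mask;
+ *		sp.key_sz_in_bits = key_bits;
+ *		sp.result = result;
+ *		sp.result_sz_in_bits = result_bits;
+ *		if (tf_set_tcam_entry(tfp, &sp) != 0) {
+ *			fp.dir = ap.dir;
+ *			fp.tcam_tbl_type = ap.tcam_tbl_type;
+ *			fp.idx = ap.idx;
+ *			tf_free_tcam_entry(tfp, &fp);
+ *		}
+ *	}
+ */
+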
+/**
+ * @page table Table Access
+ *
+ * @ref tf_alloc_tbl_entry
+ *
+ * @ref tf_free_tbl_entry
+ *
+ * @ref tf_set_tbl_entry
+ *
+ * @ref tf_get_tbl_entry
+ */
+
+/**
  * Enumeration of TruFlow table types. A table type is used to identify a
  * resource object.
  *
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c44f96f..f4b2f4c 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -106,6 +106,39 @@ struct tf_msg_dma_buf {
 	uint64_t pa_addr;
 };
 
+static int
+tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
+		   uint32_t *hwrm_type)
+{
+	int rc = 0;
+
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		*hwrm_type = TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY;
+		break;
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		*hwrm_type = TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY;
+		break;
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		*hwrm_type = TF_DEV_DATA_TYPE_TF_WC_ENTRY;
+		break;
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+		rc = -EOPNOTSUPP;
+		break;
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+		rc = -EOPNOTSUPP;
+		break;
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+		rc = -EOPNOTSUPP;
+		break;
+	default:
+		rc = -EOPNOTSUPP;
+		break;
+	}
+
+	return rc;
+}
+
 /**
  * Sends session open request to TF Firmware
  */
@@ -835,3 +868,129 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+#define TF_BYTES_PER_SLICE(tfp) 12
+#define NUM_SLICES(tfp, bytes) \
+	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
+
+static int
+tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
+{
+	struct tfp_calloc_parms alloc_parms;
+	int rc;
+
+	/* Allocate the DMA buffer */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = size;
+	alloc_parms.alignment = 0;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate tcam dma entry, rc:%d\n",
+			    rc);
+		return -ENOMEM;
+	}
+
+	buf->pa_addr = (uint64_t)alloc_parms.mem_pa;
+	buf->va_addr = alloc_parms.mem_va;
+
+	return 0;
+}
+
+int
+tf_msg_tcam_entry_set(struct tf *tfp,
+		      struct tf_set_tcam_entry_parms *parms)
+{
+	int rc;
+	struct tfp_send_msg_parms mparms = { 0 };
+	struct hwrm_tf_tcam_set_input req = { 0 };
+	struct hwrm_tf_tcam_set_output resp = { 0 };
+	uint16_t key_bytes =
+		TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	uint16_t result_bytes =
+		TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
+	struct tf_msg_dma_buf buf = { 0 };
+	uint8_t *data = NULL;
+	int data_size = 0;
+
+	rc = tf_tcam_tbl_2_hwrm(parms->tcam_tbl_type, &req.type);
+	if (rc != 0)
+		return rc;
+
+	req.idx = tfp_cpu_to_le_16(parms->idx);
+	if (parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
+
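+	/* Device data layout: key at offset 0, mask at offset
+	 * key_bytes, result at offset 2 * key_bytes:
+	 *
+	 *   [ key | mask | result ]
+	 */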
+	req.key_size = key_bytes;
+	req.mask_offset = key_bytes;
+	/* Result follows after key and mask, thus multiply by 2 */
+	req.result_offset = 2 * key_bytes;
+	req.result_size = result_bytes;
+	data_size = 2 * req.key_size + req.result_size;
+
+	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
+		/* use pci buffer */
+		data = &req.dev_data[0];
+	} else {
+		/* use dma buffer */
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
+		rc = tf_msg_get_dma_buf(&buf, data_size);
+		if (rc != 0)
+			return rc;
+		data = buf.va_addr;
+		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
+	}
+
+	memcpy(&data[0], parms->key, key_bytes);
+	memcpy(&data[key_bytes], parms->mask, key_bytes);
+	memcpy(&data[req.result_offset], parms->result, result_bytes);
+
+	mparms.tf_type = HWRM_TF_TCAM_SET;
+	mparms.req_data = (uint32_t *)&req;
+	mparms.req_size = sizeof(req);
+	mparms.resp_data = (uint32_t *)&resp;
+	mparms.resp_size = sizeof(resp);
+	mparms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &mparms);
+
+	/* Free the DMA buffer (if one was used) regardless of the
+	 * send result so it is not leaked on failure.
+	 */
+	if (buf.va_addr != NULL)
+		tfp_free(buf.va_addr);
+
+	return rc;
+}
+
+int
+tf_msg_tcam_entry_free(struct tf *tfp,
+		       struct tf_free_tcam_entry_parms *in_parms)
+{
+	int rc;
+	struct hwrm_tf_tcam_free_input req =  { 0 };
+	struct hwrm_tf_tcam_free_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	rc = tf_tcam_tbl_2_hwrm(in_parms->tcam_tbl_type, &req.type);
+	if (rc != 0)
+		return rc;
+
+	req.count = 1;
+	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
+	if (in_parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
+
+	parms.tf_type = HWRM_TF_TCAM_FREE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 057de84..fa74d78 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -120,4 +120,34 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   enum tf_dir dir,
 				   struct tf_rm_entry *sram_entry);
 
+/**
+ * Sends tcam entry 'set' to the Firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to set parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_tcam_entry_set(struct tf *tfp,
+			  struct tf_set_tcam_entry_parms *parms);
+
+/**
+ * Sends tcam entry 'free' to the Firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_tcam_entry_free(struct tf *tfp,
+			   struct tf_free_tcam_entry_parms *parms);
+
 #endif  /* _TF_MSG_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 11/34] net/bnxt: add tf core table scope support
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (9 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 10/34] net/bnxt: add tf core TCAM support Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 12/34] net/bnxt: add EM/EEM functionality Venkat Duvvuru
                     ` (24 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Farah Smith, Michael Wildt

From: Farah Smith <farah.smith@broadcom.com>

- Added TruFlow Table public API
- Added Table Scope capability including Table Type support code for
  setting and getting Table Types.

Signed-off-by: Farah Smith <farah.smith@broadcom.com>
Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile          |   1 +
 drivers/net/bnxt/tf_core/hwrm_tf.h |  21 ++++++
 drivers/net/bnxt/tf_core/tf_core.c |   4 ++
 drivers/net/bnxt/tf_core/tf_core.h | 128 +++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c  |  81 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h  |  63 ++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.c  |  43 +++++++++++++
 7 files changed, 341 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 02f8c3f..6714a6a 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -52,6 +52,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
 #
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index a8a5547..acb9a8b 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -891,6 +891,27 @@ typedef struct tf_session_sram_resc_flush_input {
 } tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
 BUILD_BUG_ON(sizeof(tf_session_sram_resc_flush_input_t) <= TF_MAX_REQ_SIZE);
 
+/* Input params for table type set */
+typedef struct tf_tbl_type_set_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the set applies to RX */
+#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX			(0x0)
+	/* When set to 1, indicates the set applies to TX */
+#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* Type of the object to set */
+	uint32_t			 type;
+	/* Size of the data to set in bytes */
+	uint16_t			 size;
+	/* Data to set */
+	uint8_t			  data[TF_BULK_SEND];
+	/* Index to set */
+	uint32_t			 index;
+} tf_tbl_type_set_input_t, *ptf_tbl_type_set_input_t;
+BUILD_BUG_ON(sizeof(tf_tbl_type_set_input_t) <= TF_MAX_REQ_SIZE);
+
 /* Input params for table type get */
 typedef struct tf_tbl_type_get_input {
 	/* Session Id */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 152cfa2..2833de2 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -7,6 +7,7 @@
 
 #include "tf_core.h"
 #include "tf_session.h"
+#include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
@@ -172,6 +173,9 @@ tf_open_session(struct tf                    *tfp,
 	/* Setup hash seeds */
 	tf_seeds_init(session);
 
+	/* Initialize external pool data structures */
+	tf_init_tbl_pool(session);
+
 	session->ref_count++;
 
 	/* Return session ID */
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 1431d06..4c90677 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -458,6 +458,134 @@ int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
 
 /**
+ * @page dram_table DRAM Table Scope Interface
+ *
+ * @ref tf_alloc_tbl_scope
+ *
+ * @ref tf_free_tbl_scope
+ *
+ * If the EEM memory is allocated from the core, it must be stored in
+ * the shared session data structure so that it can be freed later
+ * (for example, if the PF goes away).
+ *
+ * The current design allocates this memory within the core.
+ */
+
+
+/** tf_alloc_tbl_scope_parms definition
+ */
+struct tf_alloc_tbl_scope_parms {
+	/**
+	 * [in] Maximum key size required.
+	 */
+	uint16_t rx_max_key_sz_in_bits;
+	/**
+	 * [in] Maximum Action size required (includes inlined items)
+	 */
+	uint16_t rx_max_action_entry_sz_in_bits;
+	/**
+	 * [in] Memory size in Megabytes
+	 * Total memory size allocated by user to be divided
+	 * up for actions, hash, counters.  Only inline external actions.
+	 * Use this variable or the number of flows, do not set both.
+	 */
+	uint32_t rx_mem_size_in_mb;
+	/**
+	 * [in] Number of flows * 1000. If set, rx_mem_size_in_mb must equal 0.
+	 */
+	uint32_t rx_num_flows_in_k;
+	/**
+	 * [in] SR2 only receive table access interface id
+	 */
+	uint32_t rx_tbl_if_id;
+	/**
+	 * [in] Maximum key size required.
+	 */
+	uint16_t tx_max_key_sz_in_bits;
+	/**
+	 * [in] Maximum Action size required (includes inlined items)
+	 */
+	uint16_t tx_max_action_entry_sz_in_bits;
+	/**
+	 * [in] Memory size in Megabytes
+	 * Total memory size allocated by user to be divided
+	 * up for actions, hash, counters.  Only inline external actions.
+	 */
+	uint32_t tx_mem_size_in_mb;
+	/**
+	 * [in] Number of flows * 1000
+	 */
+	uint32_t tx_num_flows_in_k;
+	/**
+	 * [in] SR2 only transmit table access interface id
+	 */
+	uint32_t tx_tbl_if_id;
+	/**
+	 * [out] table scope identifier
+	 */
+	uint32_t tbl_scope_id;
+};
+
+struct tf_free_tbl_scope_parms {
+	/**
+	 * [in] table scope identifier
+	 */
+	uint32_t tbl_scope_id;
+};
+
+/**
+ * allocate a table scope
+ *
+ * On SR2 Firmware will allocate a scope ID.  On other devices, the scope
+ * is a software construct to identify an EEM table.  This function will
+ * divide the hash memory/buckets and records according to the
+ * device constraints based upon calculations using either the number of flows
+ * requested or the size of memory indicated.  Other parameters passed in
+ * determine the configuration (maximum key size, maximum external action
+ * record size).
+ *
+ * This API will allocate the table region in
+ * DRAM, program the PTU page table entries, and program the number of static
+ * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
+ * 0 in the EM memory for the scope.  Upon successful completion of this API,
+ * hash tables are fully initialized and ready for entries to be inserted.
+ *
+ * A single API is used to allocate a common table scope identifier in both
+ * receive and transmit CFA. The scope identifier is common due to nature of
+ * connection tracking sending notifications between RX and TX direction.
+ *
+ * The receive and transmit table access identifiers specify which rings will
+ * be used to initialize table DRAM.  The application must ensure mutual
+ * exclusivity of ring usage for table scope allocation and any table update
+ * operations.
+ *
+ * The hash table buckets, EM keys, and EM lookup results are stored in the
+ * memory allocated based on the rx_mem_size_in_mb/tx_mem_size_in_mb
+ * parameters.  The
+ * hash table buckets are stored at the beginning of that memory.
+ *
+ * NOTES:  No EM internal setup is done here. On chip EM records are managed
+ * internally by TruFlow core.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_tbl_scope(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms);
+
+
+/**
+ * free a table scope
+ *
+ * Firmware checks that the table scope ID is owned by the TruFlow
+ * session, verifies that no references to this table scope remain
+ * (SR2 ILT or Profile TCAM entries) for either CFA (RX/TX) direction,
+ * then frees the table scope ID.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tbl_scope(struct tf *tfp,
+		      struct tf_free_tbl_scope_parms *parms);
+
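+/*
+ * Illustrative usage sketch (editor's note, not part of this patch):
+ * allocating a table scope sized by flow count and then freeing it.
+ * The field values are hypothetical. Since rx/tx_num_flows_in_k are
+ * set, rx/tx_mem_size_in_mb are left at 0 (the two are mutually
+ * exclusive).
+ *
+ *	struct tf_alloc_tbl_scope_parms ap = { 0 };
+ *	struct tf_free_tbl_scope_parms fp = { 0 };
+ *
+ *	ap.rx_max_key_sz_in_bits = 448;
+ *	ap.rx_max_action_entry_sz_in_bits = 64;
+ *	ap.rx_num_flows_in_k = 32;
+ *	ap.tx_max_key_sz_in_bits = 448;
+ *	ap.tx_max_action_entry_sz_in_bits = 64;
+ *	ap.tx_num_flows_in_k = 32;
+ *	if (tf_alloc_tbl_scope(tfp, &ap) == 0) {
+ *		fp.tbl_scope_id = ap.tbl_scope_id;
+ *		tf_free_tbl_scope(tfp, &fp);
+ *	}
+ */
+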
+/**
  * TCAM table type
  */
 enum tf_tcam_tbl_type {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index f4b2f4c..b9ed127 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -869,6 +869,87 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+int
+tf_msg_set_tbl_entry(struct tf *tfp,
+		     enum tf_dir dir,
+		     enum tf_tbl_type type,
+		     uint16_t size,
+		     uint8_t *data,
+		     uint32_t index)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_set_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(type);
+	req.size = tfp_cpu_to_le_16(size);
+	req.index = tfp_cpu_to_le_32(index);
+
+	tfp_memcpy(&req.data,
+		   data,
+		   size);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_TBL_TYPE_SET,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_get_tbl_entry(struct tf *tfp,
+		     enum tf_dir dir,
+		     enum tf_tbl_type type,
+		     uint16_t size,
+		     uint8_t *data,
+		     uint32_t index)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_get_input req = { 0 };
+	struct tf_tbl_type_get_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(type);
+	req.index = tfp_cpu_to_le_32(index);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_TBL_TYPE_GET,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Verify that we got enough buffer to return the requested data */
+	if (resp.size < size)
+		return -EINVAL;
+
+	tfp_memcpy(data,
+		   &resp.data,
+		   resp.size);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
 #define TF_BYTES_PER_SLICE(tfp) 12
 #define NUM_SLICES(tfp, bytes) \
 	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index fa74d78..9055b16 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -6,6 +6,7 @@
 #ifndef _TF_MSG_H_
 #define _TF_MSG_H_
 
+#include "tf_tbl.h"
 #include "tf_rm.h"
 
 struct tf;
@@ -150,4 +151,66 @@ int tf_msg_tcam_entry_set(struct tf *tfp,
 int tf_msg_tcam_entry_free(struct tf *tfp,
 			   struct tf_free_tcam_entry_parms *parms);
 
+/**
+ * Sends Set message of a Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] dir
+ *   Direction location of the element to set
+ *
+ * [in] type
+ *   Type of the object to set
+ *
+ * [in] size
+ *   Size of the data to set
+ *
+ * [in] data
+ *   Data to set
+ *
+ * [in] index
+ *   Index to set
+ *
+ * Returns:
+ *   0 - Success
+ */
+int tf_msg_set_tbl_entry(struct tf *tfp,
+			 enum tf_dir dir,
+			 enum tf_tbl_type type,
+			 uint16_t size,
+			 uint8_t *data,
+			 uint32_t index);
+
+/**
+ * Sends get message of a Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] dir
+ *   Direction location of the element to get
+ *
+ * [in] type
+ *   Type of the object to get
+ *
+ * [in] size
+ *   Size of the data to read
+ *
+ * [out] data
+ *   Buffer the data is read into
+ *
+ * [in] index
+ *   Index to get
+ *
+ * Returns:
+ *   0 - Success
+ */
+int tf_msg_get_tbl_entry(struct tf *tfp,
+			 enum tf_dir dir,
+			 enum tf_tbl_type type,
+			 uint16_t size,
+			 uint8_t *data,
+			 uint32_t index);
+
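+/*
+ * Illustrative usage sketch (editor's note, not part of this patch):
+ * writing and reading back an 8-byte table element. The "type",
+ * "idx" and "data" values are hypothetical.
+ *
+ *	uint8_t data[8] = { 0 };
+ *	int rc;
+ *
+ *	rc = tf_msg_set_tbl_entry(tfp, TF_DIR_RX, type,
+ *				  sizeof(data), data, idx);
+ *	if (rc == 0)
+ *		rc = tf_msg_get_tbl_entry(tfp, TF_DIR_RX, type,
+ *					  sizeof(data), data, idx);
+ */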
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
new file mode 100644
index 0000000..14bf4ef
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Truflow Table APIs and supporting code */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include "hsi_struct_def_dpdk.h"
+
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "hwrm_tf.h"
+#include "bnxt.h"
+#include "tf_resources.h"
+#include "tf_rm.h"
+
+#define PTU_PTE_VALID          0x1UL
+#define PTU_PTE_LAST           0x2UL
+#define PTU_PTE_NEXT_TO_LAST   0x4UL
+
+/* Number of pointers per page_size */
+#define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
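+/* e.g. a 4KB page on a 64-bit host holds 4096 / 8 = 512 pointers */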
+
+/* API defined in tf_tbl.h */
+void
+tf_init_tbl_pool(struct tf_session *session)
+{
+	enum tf_dir dir;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		session->ext_pool_2_scope[dir][TF_EXT_POOL_0] =
+			TF_TBL_SCOPE_INVALID;
+	}
+}
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 12/34] net/bnxt: add EM/EEM functionality
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (10 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 11/34] net/bnxt: add tf core table scope support Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 13/34] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
                     ` (23 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Pete Spreadborough

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- Add TruFlow flow memory support
- Exact Match (EM) adds the capability to manage and manipulate
  data flows using on chip memory.
- Extended Exact Match (EEM) behaves similarly to EM, but at a
  vastly increased scale by using host DDR, with performance
  tradeoff due to the need to access off-chip memory.

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |    2 +
 drivers/net/bnxt/tf_core/lookup3.h            |  161 +++
 drivers/net/bnxt/tf_core/stack.c              |  107 ++
 drivers/net/bnxt/tf_core/stack.h              |  107 ++
 drivers/net/bnxt/tf_core/tf_core.c            |   51 +
 drivers/net/bnxt/tf_core/tf_core.h            |  480 ++++++-
 drivers/net/bnxt/tf_core/tf_em.c              |  516 +++++++
 drivers/net/bnxt/tf_core/tf_em.h              |  117 ++
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |  166 +++
 drivers/net/bnxt/tf_core/tf_msg.c             |  171 +++
 drivers/net/bnxt/tf_core/tf_msg.h             |   40 +
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1795 ++++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   83 ++
 13 files changed, 3789 insertions(+), 7 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/lookup3.h
 create mode 100644 drivers/net/bnxt/tf_core/stack.c
 create mode 100644 drivers/net/bnxt/tf_core/stack.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 6714a6a..4c95847 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -51,6 +51,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
diff --git a/drivers/net/bnxt/tf_core/lookup3.h b/drivers/net/bnxt/tf_core/lookup3.h
new file mode 100644
index 0000000..b1fd2cd
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/lookup3.h
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Based on lookup3.c, by Bob Jenkins, May 2006, Public Domain.
+ * http://www.burtleburtle.net/bob/c/lookup3.c
+ *
+ * These are functions for producing 32-bit hashes for hash table lookup.
+ * hashword(), hashlittle(), hashlittle2(), hashbig(), mix(), and final()
+ * are externally useful functions. Routines to test the hash are included
+ * if SELF_TEST is defined. You can use this free for any purpose. It is in
+ * the public domain. It has no warranty.
+ */
+
+#ifndef _LOOKUP3_H_
+#define _LOOKUP3_H_
+
+#define rot(x, k) (((x) << (k)) | ((x) >> (32 - (k))))
+
+/** -------------------------------------------------------------------------
+ * This is reversible, so any information in (a,b,c) before mix() is
+ * still in (a,b,c) after mix().
+ *
+ * If four pairs of (a,b,c) inputs are run through mix(), or through
+ * mix() in reverse, there are at least 32 bits of the output that
+ * are sometimes the same for one pair and different for another pair.
+ * This was tested for:
+ *   pairs that differed by one bit, by two bits, in any combination
+ *   of top bits of (a,b,c), or in any combination of bottom bits of
+ *   (a,b,c).
+ *   "differ" is defined as +, -, ^, or ~^.  For + and -, I transformed
+ *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ *   is commonly produced by subtraction) look like a single 1-bit
+ *   difference.
+ *   the base values were pseudorandom, all zero but one bit set, or
+ *   all zero plus a counter that starts at zero.
+ *
+ * Some k values for my "a-=c; a^=rot(c,k); c+=b;" arrangement that
+ * satisfy this are
+ *     4  6  8 16 19  4
+ *     9 15  3 18 27 15
+ *    14  9  3  7 17  3
+ * Well, "9 15 3 18 27 15" didn't quite get 32 bits diffing
+ * for "differ" defined as + with a one-bit base and a two-bit delta.  I
+ * used http://burtleburtle.net/bob/hash/avalanche.html to choose
+ * the operations, constants, and arrangements of the variables.
+ *
+ * This does not achieve avalanche.  There are input bits of (a,b,c)
+ * that fail to affect some output bits of (a,b,c), especially of a.  The
+ * most thoroughly mixed value is c, but it doesn't really even achieve
+ * avalanche in c.
+ *
+ * This allows some parallelism.  Read-after-writes are good at doubling
+ * the number of bits affected, so the goal of mixing pulls in the opposite
+ * direction as the goal of parallelism.  I did what I could.  Rotates
+ * seem to cost as much as shifts on every machine I could lay my hands
+ * on, and rotates are much kinder to the top and bottom bits, so I used
+ * rotates.
+ * --------------------------------------------------------------------------
+ */
+#define mix(a, b, c) \
+{ \
+	(a) -= (c); (a) ^= rot((c), 4);  (c) += b; \
+	(b) -= (a); (b) ^= rot((a), 6);  (a) += c; \
+	(c) -= (b); (c) ^= rot((b), 8);  (b) += a; \
+	(a) -= (c); (a) ^= rot((c), 16); (c) += b; \
+	(b) -= (a); (b) ^= rot((a), 19); (a) += c; \
+	(c) -= (b); (c) ^= rot((b), 4);  (b) += a; \
+}
+
+/** --------------------------------------------------------------------------
+ * final -- final mixing of 3 32-bit values (a,b,c) into c
+ *
+ * Pairs of (a,b,c) values differing in only a few bits will usually
+ * produce values of c that look totally different.  This was tested for
+ *  pairs that differed by one bit, by two bits, in any combination
+ *   of top bits of (a,b,c), or in any combination of bottom bits of
+ *   (a,b,c).
+ *   "differ" is defined as +, -, ^, or ~^.  For + and -, I transformed
+ *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ *   is commonly produced by subtraction) look like a single 1-bit
+ *   difference.
+ *   the base values were pseudorandom, all zero but one bit set, or
+ *   all zero plus a counter that starts at zero.
+ *
+ * These constants passed:
+ *  14 11 25 16 4 14 24
+ *  12 14 25 16 4 14 24
+ * and these came close:
+ *   4  8 15 26 3 22 24
+ *  10  8 15 26 3 22 24
+ *  11  8 15 26 3 22 24
+ * --------------------------------------------------------------------------
+ */
+#define final(a, b, c) \
+{ \
+	(c) ^= (b); (c) -= rot((b), 14); \
+	(a) ^= (c); (a) -= rot((c), 11); \
+	(b) ^= (a); (b) -= rot((a), 25); \
+	(c) ^= (b); (c) -= rot((b), 16); \
+	(a) ^= (c); (a) -= rot((c), 4);  \
+	(b) ^= (a); (b) -= rot((a), 14); \
+	(c) ^= (b); (c) -= rot((b), 24); \
+}
+
+/** --------------------------------------------------------------------
+ *  This works on all machines.  To be useful, it requires
+ *  -- that the key be an array of uint32_t's, and
+ *  -- that the length be the number of uint32_t's in the key
+ *
+ *  The function hashword() is identical to hashlittle() on little-endian
+ *  machines, and identical to hashbig() on big-endian machines,
+ *  except that the length has to be measured in uint32_ts rather than in
+ *  bytes. hashlittle() is more complicated than hashword() only because
+ *  hashlittle() has to dance around fitting the key bytes into registers.
+ *
+ *  Input Parameters:
+ *	 key: an array of uint32_t values
+ *	 length: the length of the key, in uint32_ts
+ *	 initval: the previous hash, or an arbitrary value
+ * --------------------------------------------------------------------
+ */
+static inline uint32_t hashword(const uint32_t *k,
+				size_t length,
+				uint32_t initval) {
+	uint32_t a, b, c;
+	int index = 12;
+
+	/* Set up the internal state */
+	a = 0xdeadbeef + (((uint32_t)length) << 2) + initval;
+	b = a;
+	c = a;
+
+	/*-------------------------------------------- handle most of the key */
+	while (length > 3) {
+		a += k[index];
+		b += k[index - 1];
+		c += k[index - 2];
+		mix(a, b, c);
+		length -= 3;
+		index -= 3;
+	}
+
+	/*-------------------------------------- handle the last 3 uint32_t's */
+	switch (length) {	      /* all the case statements fall through */
+	case 3:
+		c += k[index - 2];
+		/* Falls through. */
+	case 2:
+		b += k[index - 1];
+		/* Falls through. */
+	case 1:
+		a += k[index];
+		final(a, b, c);
+		/* Falls through. */
+	case 0:	    /* case 0: nothing left to add */
+		break;
+	}
+	/*------------------------------------------------- report the result */
+	return c;
+}
+
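+/*
+ * Illustrative usage sketch (editor's note, not part of this patch):
+ * hashing a 13-word key with an arbitrary seed. Note that hashword()
+ * indexes the key from element 12 down toward element 0, so the key
+ * array must hold at least 13 uint32_t values.
+ *
+ *	uint32_t key[13] = { 0 };
+ *	uint32_t hash = hashword(key, 13, 0xdeadbeef);
+ */
+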
+#endif /* _LOOKUP3_H_ */
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
new file mode 100644
index 0000000..3337073
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <errno.h>
+#include "stack.h"
+
+#define STACK_EMPTY -1
+
+/* Initialize stack
+ */
+int
+stack_init(int num_entries, uint32_t *items, struct stack *st)
+{
+	if (items == NULL || st == NULL)
+		return -EINVAL;
+
+	st->max = num_entries;
+	st->top = STACK_EMPTY;
+	st->items = items;
+
+	return 0;
+}
+
+/* Return the size of the stack
+ */
+int32_t
+stack_size(struct stack *st)
+{
+	return st->top + 1;
+}
+
+/* Check if the stack is empty
+ */
+bool
+stack_is_empty(struct stack *st)
+{
+	return st->top == STACK_EMPTY;
+}
+
+/* Check if the stack is full
+ */
+bool
+stack_is_full(struct stack *st)
+{
+	return st->top == st->max - 1;
+}
+
+/* Add element x to the stack
+ */
+int
+stack_push(struct stack *st, uint32_t x)
+{
+	if (stack_is_full(st))
+		return -EOVERFLOW;
+
+	/* add an element and increments the top index
+	 */
+	st->items[++st->top] = x;
+
+	return 0;
+}
+
+/* Pop top element x from the stack and return
+ * in user provided location.
+ */
+int
+stack_pop(struct stack *st, uint32_t *x)
+{
+	if (stack_is_empty(st))
+		return -ENODATA;
+
+	*x = st->items[st->top];
+	st->top--;
+
+	return 0;
+}
+
+/* Dump the stack
+ */
+void stack_dump(struct stack *st)
+{
+	int i, j;
+
+	printf("top=%d\n", st->top);
+	printf("max=%d\n", st->max);
+
+	if (st->top == -1) {
+		printf("stack is empty\n");
+		return;
+	}
+
+	for (i = 0; i < st->max; i++) {
+		printf("item[%d] 0x%08x", i, st->items[i]);
+
+		for (j = 0; j < 7; j++) {
+			if (i++ < st->max - 1)
+				printf(" 0x%08x", st->items[i]);
+		}
+		printf("\n");
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/stack.h b/drivers/net/bnxt/tf_core/stack.h
new file mode 100644
index 0000000..6fe8829
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/stack.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+#ifndef _STACK_H_
+#define _STACK_H_
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdint.h>
+
+/** Stack data structure
+ */
+struct stack {
+	int max;         /**< Maximum number of entries */
+	int top;         /**< index of the top element */
+	uint32_t *items; /**< items in the stack */
+};
+
+/** Initialize stack of uint32_t elements
+ *
+ *  [in] num_entries
+ *    maximum number of elements in the stack
+ *
+ *  [in] items
+ *    pointer to the item storage (must hold num_entries uint32_t values)
+ *
+ *  [in] st
+ *    pointer to the stack structure
+ *
+ *  return
+ *    0 for success
+ */
+int stack_init(int num_entries,
+	       uint32_t *items,
+	       struct stack *st);
+
+/** Return the size of the stack
+ *
+ *  [in] st
+ *    pointer to the stack
+ *
+ *  return
+ *    number of elements
+ */
+int32_t stack_size(struct stack *st);
+
+/** Check if the stack is empty
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *   true or false
+ */
+bool stack_is_empty(struct stack *st);
+
+/** Check if the stack is full
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *   true or false
+ */
+bool stack_is_full(struct stack *st);
+
+/** Add element x to the stack
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * [in] x
+ *   value to push on the stack
+ * return
+ *  0 for success
+ */
+int stack_push(struct stack *st, uint32_t x);
+
+/** Pop top element x from the stack and return
+ * in user provided location.
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * [out] x
+ *  pointer to where the value popped will be written
+ *
+ * return
+ *  0 for success
+ */
+int stack_pop(struct stack *st, uint32_t *x);
+
+/** Dump stack information
+ *
+ * Warning: Don't use for large stacks due to prints
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *    none
+ */
+void stack_dump(struct stack *st);
+
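+/*
+ * Illustrative usage sketch (editor's note, not part of this patch):
+ * a four-entry stack backed by a caller-provided array.
+ *
+ *	uint32_t items[4];
+ *	struct stack st;
+ *	uint32_t val;
+ *
+ *	if (stack_init(4, items, &st) == 0) {
+ *		stack_push(&st, 42);
+ *		stack_pop(&st, &val);	val is now 42
+ *	}
+ */
+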
+#endif /* _STACK_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 2833de2..8f037a2 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -8,6 +8,7 @@
 #include "tf_core.h"
 #include "tf_session.h"
 #include "tf_tbl.h"
+#include "tf_em.h"
 #include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
@@ -288,6 +289,56 @@ tf_close_session(struct tf *tfp)
 	return rc_close;
 }
 
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_insert_em_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb     *tbl_scope_cb;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb = tbl_scope_cb_find(
+		(struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	/* Process the EM entry per Table Scope type */
+	return tf_insert_eem_entry(
+		(struct tf_session *)(tfp->session->core_data),
+		tbl_scope_cb,
+		parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb     *tbl_scope_cb;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb = tbl_scope_cb_find(
+		(struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	return tf_delete_eem_entry(tfp, parms);
+}
+
 /** allocate identifier resource
  *
  * Returns success or failure code.
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 4c90677..34e643c 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -21,6 +21,10 @@
 
 /********** BEGIN Truflow Core DEFINITIONS **********/
 
+
+#define TF_KILOBYTE  1024
+#define TF_MEGABYTE  (1024 * 1024)
+
 /**
  * direction
  */
@@ -31,6 +35,27 @@ enum tf_dir {
 };
 
 /**
+ * memory choice
+ */
+enum tf_mem {
+	TF_MEM_INTERNAL, /**< Internal */
+	TF_MEM_EXTERNAL, /**< External */
+	TF_MEM_MAX
+};
+
+/**
+ * The size of the external action record (Wh+/Brd2)
+ *
+ * Currently set to 512.
+ *
+ * AR (16B) + encap (256B) + stats_ptrs (8) + resvd (8)
+ * + stats (16) = 304 aligned on a 16B boundary
+ *
+ * Theoretically, the size should be smaller. ~304B
+ */
+#define TF_ACTION_RECORD_SZ 512
+
+/**
  * External pool size
  *
  * Defines a single pool of external action records of
@@ -56,6 +81,23 @@ enum tf_dir {
 #define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
 #define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
 
+/** EEM record AR helper
+ *
+ * Helpers to handle the Action Record Pointer in the EEM Record Entry.
+ *
+ * Convert absolute offset to action record pointer in EEM record entry
+ * Convert action record pointer in EEM record entry to absolute offset
+ */
+#define TF_ACT_REC_OFFSET_2_PTR(offset) ((offset) >> 4)
+#define TF_ACT_REC_PTR_2_OFFSET(offset) ((offset) << 4)
+
+#define TF_ACT_REC_INDEX_2_OFFSET(idx) ((idx) << 9)
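+
+/*
+ * Worked example (editor's note): action record index 3 maps to byte
+ * offset 3 << 9 = 1536 (records are TF_ACTION_RECORD_SZ = 512 bytes),
+ * and that offset stored as an AR pointer is 1536 >> 4 = 96.
+ */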
+
+/*
+ * Helper Macros
+ */
+#define TF_BITS_2_BYTES(num_bits) (((num_bits) + 7) / 8)
+
 /********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
 
 /**
@@ -495,7 +537,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t rx_num_flows_in_k;
 	/**
-	 * [in] SR2 only receive table access interface id
+	 * [in] Brd4 only receive table access interface id
 	 */
 	uint32_t rx_tbl_if_id;
 	/**
@@ -517,7 +559,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t tx_num_flows_in_k;
 	/**
-	 * [in] SR2 only transmit table access interface id
+	 * [in] Brd4 only transmit table access interface id
 	 */
 	uint32_t tx_tbl_if_id;
 	/**
@@ -536,7 +578,7 @@ struct tf_free_tbl_scope_parms {
 /**
  * allocate a table scope
  *
- * On SR2 Firmware will allocate a scope ID.  On other devices, the scope
+ * On Brd4 Firmware will allocate a scope ID.  On other devices, the scope
  * is a software construct to identify an EEM table.  This function will
  * divide the hash memory/buckets and records according to the
  * device constraints based upon calculations using either the number of flows
@@ -546,7 +588,7 @@ struct tf_free_tbl_scope_parms {
  *
  * This API will allocate the table region in
  * DRAM, program the PTU page table entries, and program the number of static
- * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
+ * buckets (if Brd4) in the RX and TX CFAs.  Buckets are assumed to start at
  * 0 in the EM memory for the scope.  Upon successful completion of this API,
  * hash tables are fully initialized and ready for entries to be inserted.
  *
@@ -563,7 +605,7 @@ struct tf_free_tbl_scope_parms {
  * memory allocated based on the rx_mem_size_in_mb/tx_mem_size_in_mb
  * parameters.  The
  * hash table buckets are stored at the beginning of that memory.
  *
- * NOTES:  No EM internal setup is done here. On chip EM records are managed
+ * NOTE:  No EM internal setup is done here. On chip EM records are managed
  * internally by TruFlow core.
  *
  * Returns success or failure code.
@@ -577,7 +619,7 @@ int tf_alloc_tbl_scope(struct tf *tfp,
  *
  * Firmware checks that the table scope ID is owned by the TruFlow
  * session, verifies that no references to this table scope remain
- * (SR2 ILT or Profile TCAM entries) for either CFA (RX/TX) direction,
+ * (Brd4 ILT or Profile TCAM entries) for either CFA (RX/TX) direction,
  * then frees the table scope ID.
  *
  * Returns success or failure code.
@@ -905,4 +947,430 @@ enum tf_tbl_type {
 	TF_TBL_TYPE_EXT_0,
 	TF_TBL_TYPE_MAX
 };
+
+/** tf_alloc_tbl_entry parameter definition
+ */
+struct tf_alloc_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/** allocate index table entries
+ *
+ * Internal types:
+ *
+ * Allocate an on chip index table entry or search for a matching
+ * entry of the indicated type for this TruFlow session.
+ *
+ * Allocates an index table record. This function will attempt to
+ * allocate an entry or search an index table for a matching entry if
+ * search is enabled (only the shadow copy of the table is accessed).
+ *
+ * If search is not enabled, the first available free entry is
+ * returned. If search is enabled and a matching entry to entry_data
+ * is found hit is set to TRUE and success is returned.
+ *
+ * External types:
+ *
+ * These are used to allocate inlined action record memory.
+ *
+ * Allocates an external index table action record.
+ *
+ * NOTE:
+ * Implementation of the internals of this function will be a stack with push
+ * and pop.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_tbl_entry(struct tf *tfp,
+		       struct tf_alloc_tbl_entry_parms *parms);
+
+/** tf_free_tbl_entry parameter definition
+ */
+struct tf_free_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+/** free index table entry
+ *
+ * Used to free a previously allocated table entry.
+ *
+ * Internal types:
+ *
+ * If session has shadow_copy enabled the shadow DB is searched and if
+ * found the element ref_cnt is decremented. If ref_cnt goes to
+ * zero then the element is returned to the session pool.
+ *
+ * If the session does not have a shadow DB the element is free'ed and
+ * given back to the session pool.
+ *
+ * External types:
+ *
+ * Free's an external index table action record.
+ *
+ * NOTE:
+ * Implementation of the internals of this function will be a stack with push
+ * and pop.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tbl_entry(struct tf *tfp,
+		      struct tf_free_tbl_entry_parms *parms);
+
+/** tf_set_tbl_entry parameter definition
+ */
+struct tf_set_tbl_entry_parms {
+	/**
+	 * [in] Table scope identifier
+	 *
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/** set index table entry
+ *
+ * Used to insert an application programmed index table entry into a
+ * previous allocated table location.  A shadow copy of the table
+ * is maintained (if enabled) (only for internal objects)
+ *
+ * Returns success or failure code.
+ */
+int tf_set_tbl_entry(struct tf *tfp,
+		     struct tf_set_tbl_entry_parms *parms);
+
+/** tf_get_tbl_entry parameter definition
+ */
+struct tf_get_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/** get index table entry
+ *
+ * Used to retrieve a previous set index table entry.
+ *
+ * Reads and compares with the shadow table copy (if enabled) (only
+ * for internal objects).
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_tbl_entry(struct tf *tfp,
+		     struct tf_get_tbl_entry_parms *parms);
+
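+/*
+ * Illustrative usage sketch (editor's note, not part of this patch):
+ * allocate an internal index table entry, program it, then free it
+ * on failure. The "type" value and the "data" buffer are
+ * hypothetical.
+ *
+ *	struct tf_alloc_tbl_entry_parms ap = { 0 };
+ *	struct tf_set_tbl_entry_parms sp = { 0 };
+ *	struct tf_free_tbl_entry_parms fp = { 0 };
+ *
+ *	ap.dir = TF_DIR_TX;
+ *	ap.type = type;
+ *	if (tf_alloc_tbl_entry(tfp, &ap) == 0) {
+ *		sp.dir = ap.dir;
+ *		sp.type = ap.type;
+ *		sp.idx = ap.idx;
+ *		sp.data = data;
+ *		sp.data_sz_in_bytes = sizeof(data);
+ *		if (tf_set_tbl_entry(tfp, &sp) != 0) {
+ *			fp.dir = ap.dir;
+ *			fp.type = ap.type;
+ *			fp.idx = ap.idx;
+ *			tf_free_tbl_entry(tfp, &fp);
+ *		}
+ *	}
+ */
+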
+/**
+ * @page exact_match Exact Match Table
+ *
+ * @ref tf_insert_em_entry
+ *
+ * @ref tf_delete_em_entry
+ *
+ * @ref tf_search_em_entry
+ *
+ */
+/** tf_insert_em_entry parameter definition
+ */
+struct tf_insert_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] ptr to structure containing key fields
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key bit length
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] ptr to structure containing result field
+	 */
+	uint8_t *em_record;
+	/**
+	 * [out] result size in bits
+	 */
+	uint16_t em_record_sz_in_bits;
+	/**
+	 * [in] duplicate check flag
+	 */
+	uint8_t	dup_check;
+	/**
+	 * [out] Flow handle value for the inserted entry.  This is encoded
+	 * as the entries[4]:bucket[2]:hashId[1]:hash[14]
+	 */
+	uint64_t flow_handle;
+	/**
+	 * [out] Flow id is returned as null (internal)
+	 * Flow id is the GFID value for the inserted entry (external)
+	 * This is the value written to the BD and useful information for mark.
+	 */
+	uint64_t flow_id;
+};
+/**
+ * tf_delete_em_entry parameter definition
+ */
+struct tf_delete_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] epoch group IDs of the entry to delete.
+	 * Two element array. (Brd4 only)
+	 */
+	uint16_t *epochs;
+	/**
+	 * [in] structure containing flow delete handle information
+	 */
+	uint64_t flow_handle;
+};
+/**
+ * tf_search_em_entry parameter definition
+ */
+struct tf_search_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] ptr to structure containing key fields
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key bit length
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in/out] ptr to structure containing EM record fields
+	 */
+	uint8_t *em_record;
+	/**
+	 * [out] result size in bits
+	 */
+	uint16_t em_record_sz_in_bits;
+	/**
+	 * [in] epoch group IDs of the entry to look up.
+	 * Two element array. (Brd4 only)
+	 */
+	uint16_t *epochs;
+	/**
+	 * [in] ptr to structure containing flow delete handle
+	 */
+	uint64_t flow_handle;
+};
+
+/** insert em hash entry in internal table memory
+ *
+ * Internal:
+ *
+ * This API inserts an exact match entry into internal EM table memory
+ * of the specified direction.
+ *
+ * Note: The EM record is managed within the TruFlow core and not the
+ * application.
+ *
+ * A shadow copy of the internal record table maintains an association
+ * between the hash and its 1, 2, or 4 associated buckets.
+ *
+ * External:
+ * This API inserts an exact match entry into DRAM EM table memory of the
+ * specified direction and table scope.
+ *
+ * When inserting an entry into an exact match table, the TruFlow library may
+ * need to allocate a dynamic bucket for the entry (Brd4 only).
+ *
+ * The insertion of duplicate entries in an EM table is not permitted. If a
+ * TruFlow application can guarantee that it will never insert duplicates, it
+ * can disable duplicate checking by passing a zero value in the dup_check
+ * parameter to this API.  This will optimize performance. Otherwise, the
+ * TruFlow library will enforce protection against inserting duplicate entries.
+ *
+ * Flow handle is defined in this document:
+ *
+ * https://docs.google.com/document/d/1NESu7RpTN3jwxbokaPfYORQyChYRmJgs40wMIRe8_-Q/edit
+ *
+ * Returns success or busy code.
+ *
+ */
+int tf_insert_em_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms);
+
+/** delete em hash entry table memory
+ *
+ * Internal:
+ *
+ * This API deletes an exact match entry from internal EM table memory of the
+ * specified direction. If a valid flow ptr is passed in then that takes
+ * precedence over the pointer to the complete key passed in.
+ *
+ * External:
+ *
+ * This API deletes an exact match entry from EM table memory of the specified
+ * direction and table scope. If a valid flow handle is passed in then that
+ * takes precedence over the pointer to the complete key passed in.
+ *
+ * The TruFlow library may release a dynamic bucket when an entry is deleted.
+ *
+ * Returns success or not found code.
+ */
+int tf_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms);
+
+/** search em hash entry table memory
+ *
+ * Internal:
+ *
+ * This API looks up an EM entry in table memory with the specified EM
+ * key or flow (flow takes precedence) and direction.
+ *
+ * The status will be one of: success or entry not found.  If the lookup
+ * succeeds, a pointer to the matching entry and the result record associated
+ * with the matching entry will be provided.
+ *
+ * If flow_handle is set, search shadow copy.
+ *
+ * Otherwise, query the fw with key to get result.
+ *
+ * External:
+ *
+ * This API looks up an EM entry in table memory with the specified EM
+ * key or flow_handle (flow takes precedence), direction and table scope.
+ *
+ * The status will be one of: success or entry not found.  If the lookup
+ * succeeds, a pointer to the matching entry and the result record associated
+ * with the matching entry will be provided.
+ *
+ *
+ * Returns success or not found code.
+ */
+int tf_search_em_entry(struct tf *tfp,
+		       struct tf_search_em_entry_parms *parms);
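+
+/*
+ * Illustrative usage sketch (editor's note, not part of this patch):
+ * inserting and then deleting an external (EEM) entry. The key and
+ * record buffers, their sizes and the table scope id are
+ * hypothetical.
+ *
+ *	struct tf_insert_em_entry_parms ip = { 0 };
+ *	struct tf_delete_em_entry_parms dp = { 0 };
+ *
+ *	ip.dir = TF_DIR_RX;
+ *	ip.mem = TF_MEM_EXTERNAL;
+ *	ip.tbl_scope_id = tbl_scope_id;
+ *	ip.key = key;
+ *	ip.key_sz_in_bits = key_bits;
+ *	ip.em_record = record;
+ *	ip.em_record_sz_in_bits = record_bits;
+ *	if (tf_insert_em_entry(tfp, &ip) == 0) {
+ *		dp.dir = ip.dir;
+ *		dp.mem = ip.mem;
+ *		dp.tbl_scope_id = ip.tbl_scope_id;
+ *		dp.flow_handle = ip.flow_handle;
+ *		tf_delete_em_entry(tfp, &dp);
+ *	}
+ */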
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
new file mode 100644
index 0000000..7109eb1
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -0,0 +1,516 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_em.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+/* Enable EEM table dump
+ */
+#define TF_EEM_DUMP
+
+static struct tf_eem_64b_entry zero_key_entry;
+
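+/* Derive the AND mask used to reduce a hash to a table index
+ * (editor's note). Valid table sizes are assumed to be powers of
+ * two; sizes that are not a multiple of 32K entries, or that exceed
+ * 128M entries, yield a mask of 0.
+ */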
+static uint32_t tf_em_get_key_mask(int num_entries)
+{
+	uint32_t mask = num_entries - 1;
+
+	if (num_entries & 0x7FFF)
+		return 0;
+
+	if (num_entries > (128 * 1024 * 1024))
+		return 0;
+
+	return mask;
+}
+
+/* CRC32i support for Key0 hash */
+#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
+#define crc32(x, y) crc32i(~0, x, y)
+
+static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
+0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
+0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
+0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
+0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
+0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
+0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
+0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
+0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
+0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
+0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
+0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
+0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
+0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
+0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
+0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
+0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
+0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
+0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
+0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
+0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
+0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
+0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
+0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
+0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
+0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
+0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
+0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
+0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
+0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
+0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
+0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
+0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
+0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
+0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
+0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
+0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
+0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
+0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
+0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
+0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
+0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
+0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
+0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
+0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
+0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
+0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
+0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
+0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
+0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
+0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
+0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
+0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
+0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
+0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
+0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
+0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
+0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
+0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
+0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
+0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
+0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
+0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
+0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
+0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
+};
+
+static uint32_t crc32i(uint32_t crc, const uint8_t *buf, size_t len)
+{
+	int l;
+
+	for (l = (len - 1); l >= 0; l--)
+		crc = ucrc32(buf[l], crc);
+
+	return ~crc;
+}
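The helpers above compute a standard reflected CRC-32 (polynomial
0xedb88320), but note crc32i() consumes the buffer from the last byte
toward the first, matching how the hardware walks the key. A quick usage
sketch, assuming this file's table and macros are in scope:

	uint8_t buf[4] = { 0xde, 0xad, 0xbe, 0xef };
	uint32_t crc = crc32(buf, sizeof(buf)); /* == crc32i(~0, buf, 4) */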
+
+static uint32_t tf_em_lkup_get_crc32_hash(struct tf_session *session,
+					  uint8_t *key,
+					  enum tf_dir dir)
+{
+	int i;
+	uint32_t index;
+	uint32_t val1, val2;
+	uint8_t temp[4];
+	uint8_t *kptr = key;
+
+	/* Do a byte-wise XOR of the 52-byte HASH key first. Note that
+	 * "key" points at the last byte of the key, so the walk is from
+	 * the end of the buffer toward the start.
+	 */
+	index = *key;
+	kptr--;
+
+	for (i = TF_HW_EM_KEY_MAX_SIZE - 2; i >= 0; i--) {
+		index = index ^ *kptr;
+		kptr--;
+	}
+
+	/* Get seeds */
+	val1 = session->lkup_em_seed_mem[dir][index * 2];
+	val2 = session->lkup_em_seed_mem[dir][index * 2 + 1];
+
+	temp[3] = (uint8_t)(val1 >> 24);
+	temp[2] = (uint8_t)(val1 >> 16);
+	temp[1] = (uint8_t)(val1 >> 8);
+	temp[0] = (uint8_t)(val1 & 0xff);
+	val1 = 0;
+
+	/* Start with seed */
+	if (!(val2 & 0x1))
+		val1 = crc32i(~val1, temp, 4);
+
+	val1 = crc32i(~val1,
+		      (key - (TF_HW_EM_KEY_MAX_SIZE - 1)),
+		      TF_HW_EM_KEY_MAX_SIZE);
+
+	/* End with seed */
+	if (val2 & 0x1)
+		val1 = crc32i(~val1, temp, 4);
+
+	return val1;
+}
+
+static uint32_t tf_em_lkup_get_lookup3_hash(uint32_t lookup3_init_value,
+					    uint8_t *in_key)
+{
+	uint32_t val1;
+
+	val1 = hashword(((uint32_t *)in_key) + 1,
+			 TF_HW_EM_KEY_MAX_SIZE / (sizeof(uint32_t)),
+			 lookup3_init_value);
+
+	return val1;
+}
+
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum tf_em_table_type table_type)
+{
+	int level = 0;
+	int page = offset / TF_EM_PAGE_SIZE;
+	void *addr = NULL;
+	struct tf_em_ctx_mem_info *ctx;
+
+	/* Validate the inputs before using them as array indexes */
+	if (tbl_scope_cb == NULL)
+		return NULL;
+
+	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
+		return NULL;
+
+	if (table_type < KEY0_TABLE || table_type > EFC_TABLE)
+		return NULL;
+
+	ctx = &tbl_scope_cb->em_ctx_info[dir];
+
+	/* Data pages hang off the deepest page table level */
+	level = ctx->em_tables[table_type].num_lvl - 1;
+
+	addr = (void *)ctx->em_tables[table_type].pg_tbl[level].pg_va_tbl[page];
+
+	return addr;
+}
+
+/** Read a key table entry
+ *
+ * The record at the given index is copied into the caller-supplied
+ * entry buffer.
+ */
+static int tf_em_read_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+				 struct tf_eem_64b_entry *entry,
+				 uint32_t entry_size,
+				 uint32_t index,
+				 enum tf_em_table_type table_type,
+				 enum tf_dir dir)
+{
+	void *page;
+	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
+
+	page = tf_em_get_table_page(tbl_scope_cb,
+				    dir,
+				    (index * entry_size),
+				    table_type);
+
+	if (page == NULL)
+		return -EINVAL;
+
+	memcpy((uint8_t *)entry, (uint8_t *)page + entry_offset, entry_size);
+	return 0;
+}
+
+static int tf_em_write_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+				 struct tf_eem_64b_entry *entry,
+				 uint32_t entry_size,
+				 uint32_t index,
+				 enum tf_em_table_type table_type,
+				 enum tf_dir dir)
+{
+	void *page;
+	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
+
+	page = tf_em_get_table_page(tbl_scope_cb,
+				    dir,
+				    (index * entry_size),
+				    table_type);
+
+	if (page == NULL)
+		return -EINVAL;
+
+	memcpy((uint8_t *)page + entry_offset, entry, entry_size);
+
+	return 0;
+}
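Both accessors resolve an entry index the same way: the byte offset is
index * entry_size, the data page is offset / TF_EM_PAGE_SIZE, and the
position within the page is offset % TF_EM_PAGE_SIZE. A sketch of the
arithmetic for one 64B record (values illustrative):

	uint32_t index  = 1000;
	uint32_t offset = index * TF_EM_KEY_RECORD_SIZE; /* 64000 */
	uint32_t page   = offset / TF_EM_PAGE_SIZE;      /* data page */
	uint32_t in_pg  = offset % TF_EM_PAGE_SIZE;      /* offset in page */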
+
+static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_eem_64b_entry *entry,
+			       uint32_t index,
+			       enum tf_em_table_type table_type,
+			       enum tf_dir dir)
+{
+	int rc;
+	struct tf_eem_64b_entry table_entry;
+
+	rc = tf_em_read_entry(tbl_scope_cb,
+			      &table_entry,
+			      TF_EM_KEY_RECORD_SIZE,
+			      index,
+			      table_type,
+			      dir);
+
+	if (rc != 0)
+		return -EINVAL;
+
+	if (table_entry.hdr.word1 & (1 << TF_LKUP_RECORD_VALID_SHIFT)) {
+		if (entry != NULL) {
+			if (memcmp(&table_entry,
+				   entry,
+				   TF_EM_KEY_RECORD_SIZE) == 0)
+				return -EEXIST;
+		} else {
+			return -EEXIST;
+		}
+
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
+				    uint8_t	       *in_key,
+				    struct tf_eem_64b_entry *key_entry)
+{
+	key_entry->hdr.word1 = result->word1;
+
+	/* Internal and external action records currently use the same
+	 * pointer encoding, so no translation is needed either way.
+	 */
+	key_entry->hdr.pointer = result->pointer;
+
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+}
+
+/* tf_em_select_inject_table
+ *
+ * Returns:
+ * 0       - Key does not exist in either table and can be inserted
+ *           at "index" in table "table".
+ * -EEXIST - Key already exists at "index" in table "table".
+ * -EINVAL - Neither table has a free slot for the key.
+ */
+static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
+					  enum tf_dir dir,
+					  struct tf_eem_64b_entry *entry,
+					  uint32_t key0_hash,
+					  uint32_t key1_hash,
+					  uint32_t *index,
+					  enum tf_em_table_type *table)
+{
+	int key0_entry;
+	int key1_entry;
+
+	/*
+	 * Check KEY0 table.
+	 */
+	key0_entry = tf_em_entry_exists(tbl_scope_cb,
+					 entry,
+					 key0_hash,
+					 KEY0_TABLE,
+					 dir);
+
+	/*
+	 * Check KEY1 table.
+	 */
+	key1_entry = tf_em_entry_exists(tbl_scope_cb,
+					 entry,
+					 key1_hash,
+					 KEY1_TABLE,
+					 dir);
+
+	if (key0_entry == -EEXIST) {
+		*table = KEY0_TABLE;
+		*index = key0_hash;
+		return -EEXIST;
+	} else if (key1_entry == -EEXIST) {
+		*table = KEY1_TABLE;
+		*index = key1_hash;
+		return -EEXIST;
+	} else if (key0_entry == 0) {
+		*table = KEY0_TABLE;
+		*index = key0_hash;
+		return 0;
+	} else if (key1_entry == 0) {
+		*table = KEY1_TABLE;
+		*index = key1_hash;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/** Insert EEM entry API
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Invalid table size, duplicate key, no free slot,
+ *             or the write failed
+ */
+int tf_insert_eem_entry(struct tf_session	   *session,
+			struct tf_tbl_scope_cb	   *tbl_scope_cb,
+			struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t	   mask;
+	uint32_t	   key0_hash;
+	uint32_t	   key1_hash;
+	uint32_t	   key0_index;
+	uint32_t	   key1_index;
+	struct tf_eem_64b_entry key_entry;
+	uint32_t	   index;
+	enum tf_em_table_type table_type;
+	uint32_t	   gfid;
+	int		   num_of_entry;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+	/* The CRC32 helper expects a pointer to the last byte of the
+	 * 56-byte buffer (52B key + 4B pad), hence the arithmetic below.
+	 */
+	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
+
+	key0_hash = tf_em_lkup_get_crc32_hash(session,
+				      &parms->key[num_of_entry] - 1,
+				      parms->dir);
+	key0_index = key0_hash & mask;
+
+	key1_hash =
+	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
+					parms->key);
+	key1_index = key1_hash & mask;
+
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct tf_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Find which table to use
+	 */
+	if (tf_em_select_inject_table(tbl_scope_cb,
+				      parms->dir,
+				      &key_entry,
+				      key0_index,
+				      key1_index,
+				      &index,
+				      &table_type) == 0) {
+		if (table_type == KEY0_TABLE) {
+			TF_SET_GFID(gfid,
+				    key0_index,
+				    KEY0_TABLE);
+		} else {
+			TF_SET_GFID(gfid,
+				    key1_index,
+				    KEY1_TABLE);
+		}
+
+		/*
+		 * Inject
+		 */
+		if (tf_em_write_entry(tbl_scope_cb,
+				      &key_entry,
+				      TF_EM_KEY_RECORD_SIZE,
+				      index,
+				      table_type,
+				      parms->dir) == 0) {
+			TF_SET_FLOW_ID(parms->flow_id,
+				       gfid,
+				       TF_GFID_TABLE_EXTERNAL,
+				       parms->dir);
+			TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+						     0,
+						     0,
+						     0,
+						     index,
+						     0,
+						     table_type);
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}
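In short, every key gets two candidate buckets: a seeded CRC-32 bucket
in KEY0 and a lookup3 bucket in KEY1, both masked down to the table
size. A sketch of the bucket computation using the static helpers above
(key is a uint8_t pointer to the start of the 56-byte buffer; the CRC
helper expects a pointer to its last byte):

	uint32_t key0 = tf_em_lkup_get_crc32_hash(session,
			key + TF_HW_EM_KEY_MAX_SIZE + 4 - 1,
			dir) & mask;
	uint32_t key1 = tf_em_lkup_get_lookup3_hash(
			session->lkup_lkup3_init_cfg[dir], key) & mask;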
+
+/** Delete EEM hash entry API
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error, bad session/table scope, or no valid
+ *             entry at the index encoded in the flow handle
+ */
+int tf_delete_eem_entry(struct tf *tfp,
+			struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_session	   *session;
+	struct tf_tbl_scope_cb	   *tbl_scope_cb;
+	enum tf_em_table_type hash_type;
+	uint32_t index;
+
+	if (parms == NULL || tfp == NULL || tfp->session == NULL)
+		return -EINVAL;
+
+	session = (struct tf_session *)tfp->session->core_data;
+	if (session == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	if (parms->flow_handle == 0)
+		return -EINVAL;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	if (tf_em_entry_exists(tbl_scope_cb,
+			       NULL,
+			       index,
+			       hash_type,
+			       parms->dir) == -EEXIST) {
+		tf_em_write_entry(tbl_scope_cb,
+				  &zero_key_entry,
+				  TF_EM_KEY_RECORD_SIZE,
+				  index,
+				  hash_type,
+				  parms->dir);
+
+		return 0;
+	}
+
+	return -EINVAL;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
new file mode 100644
index 0000000..8a3584f
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EM_H_
+#define _TF_EM_H_
+
+#include "tf_core.h"
+#include "tf_session.h"
+
+#define TF_HW_EM_KEY_MAX_SIZE 52
+#define TF_EM_KEY_RECORD_SIZE 64
+
+/** EEM Entry header
+ *
+ */
+struct tf_eem_entry_hdr {
+	uint32_t pointer;
+	uint32_t word1;  /*
+			  * The header is two words; this is the first.
+			  * It carries several subfields for which no
+			  * single name fits, hence simply "word1".
+			  */
+#define TF_LKUP_RECORD_VALID_SHIFT 31
+#define TF_LKUP_RECORD_VALID_MASK 0x80000000
+#define TF_LKUP_RECORD_L1_CACHEABLE_SHIFT 30
+#define TF_LKUP_RECORD_L1_CACHEABLE_MASK 0x40000000
+#define TF_LKUP_RECORD_STRENGTH_SHIFT 28
+#define TF_LKUP_RECORD_STRENGTH_MASK 0x30000000
+#define TF_LKUP_RECORD_RESERVED_SHIFT 17
+#define TF_LKUP_RECORD_RESERVED_MASK 0x0FFE0000
+#define TF_LKUP_RECORD_KEY_SIZE_SHIFT 8
+#define TF_LKUP_RECORD_KEY_SIZE_MASK 0x0001FF00
+#define TF_LKUP_RECORD_ACT_REC_SIZE_SHIFT 3
+#define TF_LKUP_RECORD_ACT_REC_SIZE_MASK 0x000000F8
+#define TF_LKUP_RECORD_ACT_REC_INT_SHIFT 2
+#define TF_LKUP_RECORD_ACT_REC_INT_MASK 0x00000004
+#define TF_LKUP_RECORD_EXT_FLOW_CTR_SHIFT 1
+#define TF_LKUP_RECORD_EXT_FLOW_CTR_MASK 0x00000002
+#define TF_LKUP_RECORD_ACT_PTR_MSB_SHIFT 0
+#define TF_LKUP_RECORD_ACT_PTR_MSB_MASK 0x00000001
+};
+
+/** EEM Entry
+ *  Each EEM entry is 512 bits (64 bytes)
+ */
+struct tf_eem_64b_entry {
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+	/** Header is 8 bytes long */
+	struct tf_eem_entry_hdr hdr;
+};
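With an 8-byte header inside a 64-byte record, the key field works out
to 56 bytes (448 bits). A compile-time guard for that invariant could
look like this (a sketch, not part of the patch; DPDK's
RTE_BUILD_BUG_ON would serve equally well):

	_Static_assert(sizeof(struct tf_eem_64b_entry) ==
		       TF_EM_KEY_RECORD_SIZE,
		       "EEM entry must be exactly 64 bytes");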
+
+/**
+ * Allocates EEM Table scope
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ *   -ENOMEM - Out of memory
+ */
+int tf_alloc_eem_tbl_scope(struct tf *tfp,
+			   struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Frees the EEM Table scope control block
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_free_eem_tbl_scope_cb(struct tf *tfp,
+			     struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Function to search for table scope control block structure
+ * with specified table scope ID.
+ *
+ * [in] session
+ *   Session to use for the search of the table scope control block
+ * [in] tbl_scope_id
+ *   Table scope ID to search for
+ *
+ * Returns:
+ *  Pointer to the found table scope control block struct or NULL if
+ *  table scope control block struct not found
+ */
+struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
+					  uint32_t tbl_scope_id);
+
+int tf_insert_eem_entry(struct tf_session *session,
+			struct tf_tbl_scope_cb *tbl_scope_cb,
+			struct tf_insert_em_entry_parms *parms);
+
+int tf_delete_eem_entry(struct tf *tfp,
+			struct tf_delete_em_entry_parms *parms);
+
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum tf_em_table_type table_type);
+
+#endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
new file mode 100644
index 0000000..417a99c
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
@@ -0,0 +1,166 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EXT_FLOW_HANDLE_H_
+#define _TF_EXT_FLOW_HANDLE_H_
+
+#define TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK	0x00000000F0000000ULL
+#define TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT	28
+#define TF_FLOW_TYPE_FLOW_HANDLE_MASK		0x00000000000000F0ULL
+#define TF_FLOW_TYPE_FLOW_HANDLE_SFT		4
+#define TF_FLAGS_FLOW_HANDLE_MASK		0x000000000000000FULL
+#define TF_FLAGS_FLOW_HANDLE_SFT		0
+#define TF_INDEX_FLOW_HANDLE_MASK		0xFFFFFFF000000000ULL
+#define TF_INDEX_FLOW_HANDLE_SFT		36
+#define TF_ENTRY_NUM_FLOW_HANDLE_MASK		0x0000000E00000000ULL
+#define TF_ENTRY_NUM_FLOW_HANDLE_SFT		33
+#define TF_HASH_TYPE_FLOW_HANDLE_MASK		0x0000000100000000ULL
+#define TF_HASH_TYPE_FLOW_HANDLE_SFT		32
+
+#define TF_FLOW_HANDLE_MASK (TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK |	\
+				TF_FLOW_TYPE_FLOW_HANDLE_MASK |		\
+				TF_FLAGS_FLOW_HANDLE_MASK |		\
+				TF_INDEX_FLOW_HANDLE_MASK |		\
+				TF_ENTRY_NUM_FLOW_HANDLE_MASK |		\
+				TF_HASH_TYPE_FLOW_HANDLE_MASK)
+
+#define TF_GET_FIELDS_FROM_FLOW_HANDLE(flow_handle,			\
+				       num_key_entries,			\
+				       flow_type,			\
+				       flags,				\
+				       index,				\
+				       entry_num,			\
+				       hash_type)			\
+do {									\
+	(num_key_entries) = \
+		(((flow_handle) & TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK) >> \
+		 TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT);			\
+	(flow_type) = (((flow_handle) & TF_FLOW_TYPE_FLOW_HANDLE_MASK) >> \
+		     TF_FLOW_TYPE_FLOW_HANDLE_SFT);			\
+	(flags) = (((flow_handle) & TF_FLAGS_FLOW_HANDLE_MASK) >>	\
+		     TF_FLAGS_FLOW_HANDLE_SFT);				\
+	(index) = (((flow_handle) & TF_INDEX_FLOW_HANDLE_MASK) >>	\
+		     TF_INDEX_FLOW_HANDLE_SFT);				\
+	(entry_num) = (((flow_handle) & TF_ENTRY_NUM_FLOW_HANDLE_MASK) >> \
+		     TF_ENTRY_NUM_FLOW_HANDLE_SFT);			\
+	(hash_type) = (((flow_handle) & TF_HASH_TYPE_FLOW_HANDLE_MASK) >> \
+		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
+} while (0)
+
+#define TF_SET_FIELDS_IN_FLOW_HANDLE(flow_handle,			\
+				     num_key_entries,			\
+				     flow_type,				\
+				     flags,				\
+				     index,				\
+				     entry_num,				\
+				     hash_type)				\
+do {									\
+	(flow_handle) &= ~TF_FLOW_HANDLE_MASK;				\
+	(flow_handle) |= \
+		(((num_key_entries) << TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT) & \
+		 TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= (((flow_type) << TF_FLOW_TYPE_FLOW_HANDLE_SFT) & \
+			TF_FLOW_TYPE_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= (((flags) << TF_FLAGS_FLOW_HANDLE_SFT) &	\
+			TF_FLAGS_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= ((((uint64_t)index) << TF_INDEX_FLOW_HANDLE_SFT) & \
+			TF_INDEX_FLOW_HANDLE_MASK);			\
+	(flow_handle) |=						\
+		((((uint64_t)entry_num) << TF_ENTRY_NUM_FLOW_HANDLE_SFT) & \
+		 TF_ENTRY_NUM_FLOW_HANDLE_MASK);			\
+	(flow_handle) |=						\
+		((((uint64_t)hash_type) << TF_HASH_TYPE_FLOW_HANDLE_SFT) & \
+		 TF_HASH_TYPE_FLOW_HANDLE_MASK);			\
+} while (0)
+#define TF_SET_FIELDS_IN_WH_FLOW_HANDLE TF_SET_FIELDS_IN_FLOW_HANDLE
+
+#define TF_GET_INDEX_FROM_FLOW_HANDLE(flow_handle,			\
+				      index)				\
+do {									\
+	index = (((flow_handle) & TF_INDEX_FLOW_HANDLE_MASK) >>		\
+		     TF_INDEX_FLOW_HANDLE_SFT);				\
+} while (0)
+
+#define TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(flow_handle,			\
+					  hash_type)			\
+do {									\
+	hash_type = (((flow_handle) & TF_HASH_TYPE_FLOW_HANDLE_MASK) >>	\
+		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
+} while (0)
+
+/*
+ * 32 bit Flow ID handlers
+ */
+#define TF_GFID_FLOW_ID_MASK		0xFFFFFFF0UL
+#define TF_GFID_FLOW_ID_SFT		4
+#define TF_FLAG_FLOW_ID_MASK		0x00000002UL
+#define TF_FLAG_FLOW_ID_SFT		1
+#define TF_DIR_FLOW_ID_MASK		0x00000001UL
+#define TF_DIR_FLOW_ID_SFT		0
+
+#define TF_SET_FLOW_ID(flow_id, gfid, flag, dir)			\
+do {									\
+	(flow_id) &= ~(TF_GFID_FLOW_ID_MASK |				\
+		     TF_FLAG_FLOW_ID_MASK |				\
+		     TF_DIR_FLOW_ID_MASK);				\
+	(flow_id) |= (((gfid) << TF_GFID_FLOW_ID_SFT) &			\
+		    TF_GFID_FLOW_ID_MASK) |				\
+		(((flag) << TF_FLAG_FLOW_ID_SFT) &			\
+		 TF_FLAG_FLOW_ID_MASK) |				\
+		(((dir) << TF_DIR_FLOW_ID_SFT) &			\
+		 TF_DIR_FLOW_ID_MASK);					\
+} while (0)
+
+#define TF_GET_GFID_FROM_FLOW_ID(flow_id, gfid)				\
+do {									\
+	gfid = (((flow_id) & TF_GFID_FLOW_ID_MASK) >>			\
+		TF_GFID_FLOW_ID_SFT);					\
+} while (0)
+
+#define TF_GET_DIR_FROM_FLOW_ID(flow_id, dir)				\
+do {									\
+	dir = (((flow_id) & TF_DIR_FLOW_ID_MASK) >>			\
+		TF_DIR_FLOW_ID_SFT);					\
+} while (0)
+
+#define TF_GET_FLAG_FROM_FLOW_ID(flow_id, flag)				\
+do {									\
+	flag = (((flow_id) & TF_FLAG_FLOW_ID_MASK) >>			\
+		TF_FLAG_FLOW_ID_SFT);					\
+} while (0)
+
+/*
+ * 32 bit GFID handlers
+ */
+#define TF_HASH_INDEX_GFID_MASK	0x07FFFFFFUL
+#define TF_HASH_INDEX_GFID_SFT	0
+#define TF_HASH_TYPE_GFID_MASK	0x08000000UL
+#define TF_HASH_TYPE_GFID_SFT	27
+
+#define TF_GFID_TABLE_INTERNAL 0
+#define TF_GFID_TABLE_EXTERNAL 1
+
+#define TF_SET_GFID(gfid, index, type)					\
+do {									\
+	gfid = (((index) << TF_HASH_INDEX_GFID_SFT) &			\
+		TF_HASH_INDEX_GFID_MASK) |				\
+		(((type) << TF_HASH_TYPE_GFID_SFT) &			\
+		 TF_HASH_TYPE_GFID_MASK);				\
+} while (0)
+
+#define TF_GET_HASH_INDEX_FROM_GFID(gfid, index)			\
+do {									\
+	index = (((gfid) & TF_HASH_INDEX_GFID_MASK) >>			\
+		TF_HASH_INDEX_GFID_SFT);				\
+} while (0)
+
+#define TF_GET_HASH_TYPE_FROM_GFID(gfid, type)				\
+do {									\
+	type = (((gfid) & TF_HASH_TYPE_GFID_MASK) >>			\
+		TF_HASH_TYPE_GFID_SFT);					\
+} while (0)
+
+
+#endif /* _TF_EXT_FLOW_HANDLE_H_ */
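A round-trip sketch of the accessors above: pack an index and hash type
into a 64-bit handle, then read them back (values arbitrary):

	uint64_t fh = 0;
	uint32_t index, hash_type;

	/* bucket 0x1234 in the KEY1 (hash type 1) table; rest zero */
	TF_SET_FIELDS_IN_FLOW_HANDLE(fh, 0, 0, 0, 0x1234, 0, 1);
	TF_GET_INDEX_FROM_FLOW_HANDLE(fh, index);         /* 0x1234 */
	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(fh, hash_type); /* 1 */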
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index b9ed127..c507ec7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -869,6 +869,177 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+/**
+ * Sends EM mem register request to Firmware
+ */
+int tf_msg_em_mem_rgtr(struct tf *tfp,
+		       int           page_lvl,
+		       int           page_size,
+		       uint64_t      dma_addr,
+		       uint16_t     *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
+	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.page_level = page_lvl;
+	req.page_size = page_size;
+	req.page_dir = tfp_cpu_to_le_64(dma_addr);
+
+	parms.tf_type = HWRM_TF_CTXT_MEM_RGTR;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*ctx_id = tfp_le_to_cpu_16(resp.ctx_id);
+
+	return rc;
+}
+
+/**
+ * Sends EM mem unregister request to Firmware
+ */
+int tf_msg_em_mem_unrgtr(struct tf *tfp,
+			 uint16_t  *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_unrgtr_input req = {0};
+	struct hwrm_tf_ctxt_mem_unrgtr_output resp = {0};
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.ctx_id = tfp_cpu_to_le_32(*ctx_id);
+
+	parms.tf_type = HWRM_TF_CTXT_MEM_UNRGTR;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
+ * Sends EM qcaps request to Firmware
+ */
+int tf_msg_em_qcaps(struct tf *tfp,
+		    int dir,
+		    struct tf_em_caps *em_caps)
+{
+	int rc;
+	struct hwrm_tf_ext_em_qcaps_input  req = {0};
+	struct hwrm_tf_ext_em_qcaps_output resp = { 0 };
+	uint32_t             flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+
+	parms.tf_type = HWRM_TF_EXT_EM_QCAPS;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	em_caps->supported = tfp_le_to_cpu_32(resp.supported);
+	em_caps->max_entries_supported =
+		tfp_le_to_cpu_32(resp.max_entries_supported);
+	em_caps->key_entry_size = tfp_le_to_cpu_16(resp.key_entry_size);
+	em_caps->record_entry_size =
+		tfp_le_to_cpu_16(resp.record_entry_size);
+	em_caps->efc_entry_size = tfp_le_to_cpu_16(resp.efc_entry_size);
+
+	return rc;
+}
+
+/**
+ * Sends EM config request to Firmware
+ */
+int tf_msg_em_cfg(struct tf *tfp,
+		  uint32_t   num_entries,
+		  uint16_t   key0_ctx_id,
+		  uint16_t   key1_ctx_id,
+		  uint16_t   record_ctx_id,
+		  uint16_t   efc_ctx_id,
+		  int        dir)
+{
+	int rc;
+	struct hwrm_tf_ext_em_cfg_input  req = {0};
+	struct hwrm_tf_ext_em_cfg_output resp = {0};
+	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	flags |= HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_PREFERRED_OFFLOAD;
+
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.num_entries = tfp_cpu_to_le_32(num_entries);
+
+	req.key0_ctx_id = tfp_cpu_to_le_16(key0_ctx_id);
+	req.key1_ctx_id = tfp_cpu_to_le_16(key1_ctx_id);
+	req.record_ctx_id = tfp_cpu_to_le_16(record_ctx_id);
+	req.efc_ctx_id = tfp_cpu_to_le_16(efc_ctx_id);
+
+	parms.tf_type = HWRM_TF_EXT_EM_CFG;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
+ * Sends EM operation request to Firmware
+ */
+int tf_msg_em_op(struct tf *tfp,
+		 int        dir,
+		 uint16_t   op)
+{
+	int rc;
+	struct hwrm_tf_ext_em_op_input  req = {0};
+	struct hwrm_tf_ext_em_op_output resp = {0};
+	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.op = tfp_cpu_to_le_16(op);
+
+	parms.tf_type = HWRM_TF_EXT_EM_OP;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
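Taken together these give the per-direction EEM bring-up later used by
tf_alloc_eem_tbl_scope(): query capabilities, configure the four context
IDs, then enable. A sketch follows; num_entries and the *_ctx variables
are placeholders, and the _ENABLE opcode name is assumed as the
counterpart of the _DISABLE opcode used on teardown:

	struct tf_em_caps caps;
	int rc;

	rc = tf_msg_em_qcaps(tfp, TF_DIR_RX, &caps);
	if (!rc)
		rc = tf_msg_em_cfg(tfp, num_entries, key0_ctx, key1_ctx,
				   record_ctx, efc_ctx, TF_DIR_RX);
	if (!rc)
		rc = tf_msg_em_op(tfp, TF_DIR_RX,
				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);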
+
 int
 tf_msg_set_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 9055b16..b8d8c1e 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -122,6 +122,46 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   struct tf_rm_entry *sram_entry);
 
 /**
+ * Sends EM mem register request to Firmware
+ */
+int tf_msg_em_mem_rgtr(struct tf *tfp,
+		       int           page_lvl,
+		       int           page_size,
+		       uint64_t      dma_addr,
+		       uint16_t     *ctx_id);
+
+/**
+ * Sends EM mem unregister request to Firmware
+ */
+int tf_msg_em_mem_unrgtr(struct tf *tfp,
+			 uint16_t     *ctx_id);
+
+/**
+ * Sends EM qcaps request to Firmware
+ */
+int tf_msg_em_qcaps(struct tf *tfp,
+		    int dir,
+		    struct tf_em_caps *em_caps);
+
+/**
+ * Sends EM config request to Firmware
+ */
+int tf_msg_em_cfg(struct tf *tfp,
+		  uint32_t      num_entries,
+		  uint16_t      key0_ctx_id,
+		  uint16_t      key1_ctx_id,
+		  uint16_t      record_ctx_id,
+		  uint16_t      efc_ctx_id,
+		  int           dir);
+
+/**
+ * Sends EM operation request to Firmware
+ */
+int tf_msg_em_op(struct tf *tfp,
+		 int        dir,
+		 uint16_t   op);
+
+/**
  * Sends tcam entry 'set' to the Firmware.
  *
  * [in] tfp
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 14bf4ef..632df4b 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -15,7 +15,7 @@
 #include "hsi_struct_def_dpdk.h"
 
 #include "tf_core.h"
-#include "tf_session.h"
+#include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
 #include "hwrm_tf.h"
@@ -30,6 +30,1366 @@
 /* Number of pointers per page_size */
 #define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
+#define TF_EM_PG_SZ_4K        (1 << 12)
+#define TF_EM_PG_SZ_8K        (1 << 13)
+#define TF_EM_PG_SZ_64K       (1 << 16)
+#define TF_EM_PG_SZ_256K      (1 << 18)
+#define TF_EM_PG_SZ_1M        (1 << 20)
+#define TF_EM_PG_SZ_2M        (1 << 21)
+#define TF_EM_PG_SZ_4M        (1 << 22)
+#define TF_EM_PG_SZ_1G        (1 << 30)
+
+#define	TF_EM_CTX_ID_INVALID   0xFFFF
+
+#define	TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define	TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
+/**
+ * Function to free a page table
+ *
+ * [in] tp
+ *   Pointer to the page table to free
+ */
+static void
+tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
+{
+	uint32_t i;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		if (!tp->pg_va_tbl[i]) {
+			PMD_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIu64 "\n",
+				    i,
+				    (uint64_t)tp);
+			continue;
+		}
+
+		tfp_free(tp->pg_va_tbl[i]);
+		tp->pg_va_tbl[i] = NULL;
+	}
+
+	tp->pg_count = 0;
+	tfp_free(tp->pg_va_tbl);
+	tp->pg_va_tbl = NULL;
+	tfp_free(tp->pg_pa_tbl);
+	tp->pg_pa_tbl = NULL;
+}
+
+/**
+ * Function to free an EM table
+ *
+ * [in] tbl
+ *   Pointer to the EM table to free
+ */
+static void
+tf_em_free_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		PMD_DRV_LOG(INFO,
+			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
+			   TF_EM_PAGE_SIZE,
+			    i,
+			    tp->pg_count);
+
+		tf_em_free_pg_tbl(tp);
+	}
+
+	tbl->l0_addr = NULL;
+	tbl->l0_dma_addr = 0;
+	tbl->num_lvl = 0;
+	tbl->num_data_pages = 0;
+}
+
+/**
+ * Allocation of page tables
+ *
+ * [in] tp
+ *   Pointer to the page table to populate
+ *
+ * [in] pg_count
+ *   Page count to allocate
+ *
+ * [in] pg_size
+ *   Size of each page
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
+		   uint32_t pg_count,
+		   uint32_t pg_size)
+{
+	uint32_t i;
+	struct tfp_calloc_parms parms;
+
+	parms.nitems = pg_count;
+	parms.size = sizeof(void *);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0)
+		return -ENOMEM;
+
+	tp->pg_va_tbl = parms.mem_va;
+
+	if (tfp_calloc(&parms) != 0) {
+		tfp_free(tp->pg_va_tbl);
+		return -ENOMEM;
+	}
+
+	tp->pg_pa_tbl = parms.mem_va;
+
+	tp->pg_count = 0;
+	tp->pg_size = pg_size;
+
+	for (i = 0; i < pg_count; i++) {
+		parms.nitems = 1;
+		parms.size = pg_size;
+		parms.alignment = TF_EM_PAGE_ALIGNMENT;
+
+		if (tfp_calloc(&parms) != 0)
+			goto cleanup;
+
+		tp->pg_pa_tbl[i] = (uint64_t)parms.mem_pa;
+		tp->pg_va_tbl[i] = parms.mem_va;
+
+		memset(tp->pg_va_tbl[i], 0, pg_size);
+		tp->pg_count++;
+	}
+
+	return 0;
+
+cleanup:
+	tf_em_free_pg_tbl(tp);
+	return -ENOMEM;
+}
+
+/**
+ * Allocates EM page tables
+ *
+ * [in] tbl
+ *   Table to allocate pages for
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp;
+	int rc = 0;
+	int i;
+	uint32_t j;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		rc = tf_em_alloc_pg_tbl(tp,
+					tbl->page_cnt[i],
+					TF_EM_PAGE_SIZE);
+		if (rc) {
+			PMD_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d\n",
+				i);
+			goto cleanup;
+		}
+
+		for (j = 0; j < tp->pg_count; j++) {
+			PMD_DRV_LOG(INFO,
+				"EEM: Allocated page table: size %u lvl %d cnt"
+				" %u VA:%p PA:%p\n",
+				TF_EM_PAGE_SIZE,
+				i,
+				tp->pg_count,
+				(uint32_t *)tp->pg_va_tbl[j],
+				(uint32_t *)tp->pg_pa_tbl[j]);
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_free_page_table(tbl);
+	return rc;
+}
+
+/**
+ * Links EM page tables
+ *
+ * [in] tp
+ *   Pointer to page table
+ *
+ * [in] tp_next
+ *   Pointer to the next page table
+ *
+ * [in] set_pte_last
+ *   Flag controlling if the page table is last
+ */
+static void
+tf_em_link_page_table(struct tf_em_page_tbl *tp,
+		      struct tf_em_page_tbl *tp_next,
+		      bool set_pte_last)
+{
+	uint64_t *pg_pa = tp_next->pg_pa_tbl;
+	uint64_t *pg_va;
+	uint64_t valid;
+	uint32_t k = 0;
+	uint32_t i;
+	uint32_t j;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			if (k == tp_next->pg_count - 2 && set_pte_last)
+				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
+			else if (k == tp_next->pg_count - 1 && set_pte_last)
+				valid = PTU_PTE_LAST | PTU_PTE_VALID;
+			else
+				valid = PTU_PTE_VALID;
+
+			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
+			if (++k >= tp_next->pg_count)
+				return;
+		}
+	}
+}
+
+/**
+ * Setup a EM page table
+ *
+ * [in] tbl
+ *   Pointer to EM page table
+ */
+static void
+tf_em_setup_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp_next;
+	struct tf_em_page_tbl *tp;
+	bool set_pte_last = false;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl - 1; i++) {
+		tp = &tbl->pg_tbl[i];
+		tp_next = &tbl->pg_tbl[i + 1];
+		if (i == tbl->num_lvl - 2)
+			set_pte_last = true;
+		tf_em_link_page_table(tp, tp_next, set_pte_last);
+	}
+
+	tbl->l0_addr = tbl->pg_tbl[PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[PT_LVL_0].pg_pa_tbl[0];
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
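Worked example with 4K pages and 64B entries: 1M entries need 64MB of
data; a single page (PT_LVL_0) covers 4KB and one level of pointers
(PT_LVL_1, 512 pointers per page) covers 2MB, so PT_LVL_2 is required,
and num_data_pages comes out to 64MB / 4KB = 16384:

	uint64_t pages;
	int lvl = tf_em_size_page_tbl_lvl(4096, 64, 1 << 20, &pages);
	/* lvl == PT_LVL_2, pages == 16384 */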
+
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == PT_LVL_0) {
+		page_cnt[PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == PT_LVL_1) {
+		page_cnt[PT_LVL_1] = num_data_pages;
+		page_cnt[PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	} else if (max_lvl == PT_LVL_2) {
+		page_cnt[PT_LVL_2] = num_data_pages;
+		page_cnt[PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_2], page_size);
+		page_cnt[PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   - EINVAL - Parameter error
+ *   - ENOMEM - Out of memory
+ */
+static int
+tf_em_size_table(struct tf_em_table *tbl)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given EEM table.
+	 */
+	if (tbl->type == RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	/* Size for the rounded-up entry count computed above */
+	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
+					  tbl->entry_size,
+					  num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		PMD_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		PMD_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type,
+			    (uint64_t)num_entries * tbl->entry_size,
+			    TF_EM_PAGE_SIZE);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
+				page_cnt);
+
+	PMD_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	PMD_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
+		    num_data_pages,
+		    page_cnt[PT_LVL_0],
+		    page_cnt[PT_LVL_1],
+		    page_cnt[PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ */
+static void
+tf_em_ctx_unreg(struct tf *tfp,
+		struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct tf_em_table *tbl;
+	int i;
+
+	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
+			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
+			tf_em_free_page_table(tbl);
+		}
+	}
+}
+
+/**
+ * Registers EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of Memory
+ */
+static int
+tf_em_ctx_reg(struct tf *tfp,
+	      struct tf_tbl_scope_cb *tbl_scope_cb,
+	      int dir)
+{
+	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct tf_em_table *tbl;
+	int rc = 0;
+	int i;
+
+	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries && tbl->entry_size) {
+			rc = tf_em_size_table(tbl);
+
+			if (rc)
+				goto cleanup;
+
+			rc = tf_em_alloc_page_table(tbl);
+			if (rc)
+				goto cleanup;
+
+			tf_em_setup_page_table(tbl);
+			rc = tf_msg_em_mem_rgtr(tfp,
+						tbl->num_lvl - 1,
+						TF_EM_PAGE_SIZE_ENUM,
+						tbl->l0_dma_addr,
+						&tbl->ctx_id);
+			if (rc)
+				goto cleanup;
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	return rc;
+}
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			PMD_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
+				    "%u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].num_entries
+		= parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].entry_size
+		= parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[EFC_TABLE].num_entries
+		= 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].num_entries
+		= parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].entry_size
+		= parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[EFC_TABLE].num_entries
+		= 0;
+
+	return 0;
+}
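Worked example of the rx_mem_size_in_mb path: with a 100MB budget, a
448-bit max key and a 512-bit max action entry, key_b = 2 * (56 + 1) =
114 and action_b = 65, so roughly 585K entries fit; that rounds up to
the next supported power of two, 1M flows, i.e. rx_num_flows_in_k =
1024:

	uint32_t key_b   = 2 * (448 / 8 + 1);               /* 114 */
	uint32_t act_b   = 512 / 8 + 1;                     /* 65  */
	uint32_t entries = (100 * TF_MEGABYTE) / (key_b + act_b);
	/* ~585K -> rounded up to 1M (1024 in K units) */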
+
+/**
+ * Internal function to set a Table Entry. Supports all internal Table Types
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_set_tbl_entry_internal(struct tf *tfp,
+			  struct tf_set_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Set the entry */
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Set failed, type:%d, rc:%d\n",
+			    parms->dir,
+			    parms->type,
+			    rc);
+	}
+
+	return rc;
+}
+
+/**
+ * Internal function to get a Table Entry. Supports all Table Types
+ * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_get_tbl_entry_internal(struct tf *tfp,
+			  struct tf_get_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Get failed, type:%d, rc:%d\n",
+			    parms->dir,
+			    parms->type,
+			    rc);
+	}
+
+	return rc;
+}
+
+#if (TF_SHADOW == 1)
+/**
+ * Allocate Tbl entry from the Shadow DB. The Shadow DB is searched for
+ * the requested entry. If found, the ref count is incremented and
+ * returned.
+ *
+ * [in] tfs
+ *   Pointer to session
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0           - Success, entry found and ref count incremented
+ *  -EOPNOTSUPP - Alloc with search is not yet supported
+ */
+static int
+tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
+			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
+{
+	PMD_DRV_LOG(ERR,
+		    "dir:%d, Entry Alloc with search not supported\n",
+		    parms->dir);
+
+	return -EOPNOTSUPP;
+}
+
+/**
+ * Free Tbl entry from the Shadow DB. The Shadow DB is searched for
+ * the requested entry. If found, the ref count is decremented and the
+ * new ref_count returned.
+ *
+ * [in] tfs
+ *   Pointer to session
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0           - Success, entry found and ref count decremented
+ *  -EOPNOTSUPP - Free with search is not yet supported
+ */
+static int
+tf_free_tbl_entry_shadow(struct tf_session *tfs,
+			 struct tf_free_tbl_entry_parms *parms)
+{
+	PMD_DRV_LOG(ERR,
+		    "dir:%d, Entry Free with search not supported\n",
+		    parms->dir);
+
+	return -EOPNOTSUPP;
+}
+#endif /* TF_SHADOW */
+
+/**
+ * Create External Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ * [in] tbl_scope_id
+ *   id of the table scope
+ * [in] num_entries
+ *   number of entries to write
+ * [in] entry_sz_bytes
+ *   size of each entry
+ *
+ * Return:
+ *  0       - Success, pool created and filled
+ *  -ENOMEM - Failure, out of resources
+ *  -EINVAL - Failure, stack could not be fully initialized
+ */
+static int
+tf_create_tbl_pool_external(struct tf_session *session,
+			    enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb,
+			    uint32_t table_scope_id,
+			    uint32_t num_entries,
+			    uint32_t entry_sz_bytes)
+
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &tbl_scope_cb->ext_pool[dir][TF_EXT_POOL_0];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		PMD_DRV_LOG(ERR, "%d: TBL: external pool failure %s\n",
+			    dir, strerror(-ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR, "%d: TBL: stack init failure %s\n",
+			    dir, strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Save the allocated memory address so that it can
+	 * be freed when the table scope is freed.
+	 */
+	tbl_scope_cb->ext_pool_mem[dir][TF_EXT_POOL_0] =
+		(uint32_t *)parms.mem_va;
+
+	/* Fill the pool with per-record byte offsets (the offset of the
+	 * last byte of each record), highest record first.
+	 */
+	j = num_entries * entry_sz_bytes - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
+				    dir, strerror(-rc));
+			goto cleanup;
+		}
+		j -= entry_sz_bytes;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
+			    dir, strerror(-rc));
+		goto cleanup;
+	}
+	/* Set the table scope associated with the pool
+	 */
+	session->ext_pool_2_scope[dir][TF_EXT_POOL_0] = table_scope_id;
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
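The external pool is a simple stack of per-record byte offsets:
allocation pops one, free pushes it back. A lifecycle sketch, assuming
the stack helpers behave as used above (use() is a hypothetical
consumer):

	uint32_t off;

	if (stack_pop(pool, &off) == 0)
		use(off);        /* off becomes parms->idx */

	stack_push(pool, off);   /* return it on free */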
+
+/**
+ * Destroy External Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ *
+ */
+static void
+tf_destroy_tbl_pool_external(struct tf_session *session,
+			    enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	uint32_t *ext_pool_mem =
+		tbl_scope_cb->ext_pool_mem[dir][TF_EXT_POOL_0];
+
+	tfp_free(ext_pool_mem);
+
+	/* Set the table scope associated with the pool
+	 */
+	session->ext_pool_2_scope[dir][TF_EXT_POOL_0] = TF_TBL_SCOPE_INVALID;
+}
+
+/**
+ * Allocate External Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+static int
+tf_alloc_tbl_entry_pool_external(struct tf *tfp,
+				 struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	uint32_t index;
+	struct tf_session *tfs;
+	uint32_t tbl_scope_id;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_id = tfs->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+	tbl_scope_cb = tbl_scope_cb_find(tfs, tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, table scope not allocated\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_pool[parms->dir][TF_EXT_POOL_0];
+
+	/* Allocate an element
+	 */
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Allocation failed, type:%d\n",
+		   parms->dir,
+		   parms->type);
+		return rc;
+	}
+	parms->idx = index;
+	return rc;
+}
+
+/**
+ * Allocate Internal Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated
+ *  -ENOMEM - Failure, entry not allocated, out of resources
+ */
+static int
+tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
+				 struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	int free_cnt;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	id = ba_alloc(session_pool);
+	if (id == -1) {
+		free_cnt = ba_free_count(session_pool);
+
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Allocation failed, type:%d, free:%d\n",
+		   parms->dir,
+		   parms->type,
+		   free_cnt);
+		return -ENOMEM;
+	}
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_ADD_BASE,
+			    id,
+			    &index);
+	parms->idx = index;
+	return rc;
+}
+
+/**
+ * Free External Tbl entry to the session pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0           - Success, entry freed
+ *  -EINVAL     - Failure, parameter or table scope error
+ *  -EOPNOTSUPP - Failure, table type not supported
+ */
+static int
+tf_free_tbl_entry_pool_external(struct tf *tfp,
+		       struct tf_free_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	uint32_t index;
+	uint32_t tbl_scope_id;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_id = tfs->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+	tbl_scope_cb = tbl_scope_cb_find(tfs, tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, table scope error\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_pool[parms->dir][TF_EXT_POOL_0];
+
+	index = parms->idx;
+
+	rc = stack_push(pool, index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, consistency error, stack full, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+	}
+	return rc;
+}
+
+/**
+ * Free Internal Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *  -ENOMEM - Failure, entry was not previously allocated
+ */
+static int
+tf_free_tbl_entry_pool_internal(struct tf *tfp,
+		       struct tf_free_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	int id;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+	uint32_t index;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	/* Convert the caller's index back to an RM-relative index by
+	 * removing the base, as there is no guarantee that the pool
+	 * starts at 0 at time of RM allocation.
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Check if element was indeed allocated */
+	id = ba_inuse_free(session_pool, index);
+	if (id == -1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Element not previously alloc'ed, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -ENOMEM;
+	}
+
+	return rc;
+}
+
 /* API defined in tf_tbl.h */
 void
 tf_init_tbl_pool(struct tf_session *session)
@@ -41,3 +1401,436 @@ tf_init_tbl_pool(struct tf_session *session)
 			TF_TBL_SCOPE_INVALID;
 	}
 }
+
+/* API defined in tf_em.h */
+struct tf_tbl_scope_cb *
+tbl_scope_cb_find(struct tf_session *session,
+		  uint32_t tbl_scope_id)
+{
+	int i;
+
+	/* Check that id is valid */
+	i = ba_inuse(session->tbl_scope_pool_rx, tbl_scope_id);
+	if (i < 0)
+		return NULL;
+
+	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
+		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &session->tbl_scopes[i];
+	}
+
+	return NULL;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_eem_tbl_scope_cb(struct tf *tfp,
+			 struct tf_free_tbl_scope_parms *parms)
+{
+	int rc = 0;
+	enum tf_dir  dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+
+	session = (struct tf_session *)(tfp->session->core_data);
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	/* Free Table control block */
+	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
+
+	/* Free the per-direction resources */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools */
+		tf_destroy_tbl_pool_external(session,
+					     dir,
+					     tbl_scope_cb);
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+
+		/* free table scope and all associated resources */
+		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	}
+
+	return rc;
+}
+
+/* API defined in tf_em.h */
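+/*
+ * Allocates an EEM table scope: queries EEM capabilities for both
+ * directions and validates the requested table sizes, then, per
+ * direction, registers the EEM context pages with firmware, enables
+ * EEM and creates the pool of external action record offsets.
+ */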
+int
+tf_alloc_eem_tbl_scope(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	enum tf_dir dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_em_table *em_tables;
+	int index;
+	struct tf_session *session;
+	struct tf_free_tbl_scope_parms free_parms;
+
+	/* check parameters */
+	if (parms == NULL || tfp->session == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* Get Table Scope control block from the session pool */
+	index = ba_alloc(session->tbl_scope_pool_rx);
+	if (index == -1) {
+		PMD_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
+			    "Control Block\n");
+		return -ENOMEM;
+	}
+
+	tbl_scope_cb = &session->tbl_scopes[index];
+	tbl_scope_cb->index = index;
+	tbl_scope_cb->tbl_scope_id = index;
+	parms->tbl_scope_id = index;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				"EEM: Unable to query for EEM capability\n");
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/*
+		 * Allocate tables and signal configuration to FW
+		 */
+		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx\n");
+			goto cleanup;
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_msg_em_cfg(tfp,
+				   em_tables[KEY0_TABLE].num_entries,
+				   em_tables[KEY0_TABLE].ctx_id,
+				   em_tables[KEY1_TABLE].ctx_id,
+				   em_tables[RECORD_TABLE].ctx_id,
+				   em_tables[EFC_TABLE].ctx_id,
+				   dir);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				"TBL: Unable to configure EEM in firmware\n");
+			goto cleanup_full;
+		}
+
+		rc = tf_msg_em_op(tfp,
+				  dir,
+				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
+
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware\n");
+			goto cleanup_full;
+		}
+
+		/* Allocate the pool of offsets of the external memory.
+		 * Initially, this is a single fixed size pool for all external
+		 * actions related to a single table scope.
+		 */
+		rc = tf_create_tbl_pool_external(session,
+						 dir,
+						 tbl_scope_cb,
+						 index,
+						 TF_EXT_POOL_ENTRY_CNT,
+						 TF_EXT_POOL_ENTRY_SZ_BYTES);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "%d TBL: Unable to allocate idx pools %s\n",
+				    dir,
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	return 0;
+
+cleanup_full:
+	free_parms.tbl_scope_id = index;
+	tf_free_eem_tbl_scope_cb(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
+	return -EINVAL;
+}
+
+/* API defined in tf_core.h */
+int
+tf_set_tbl_entry(struct tf *tfp,
+		 struct tf_set_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+
+	if (tfp == NULL || parms == NULL || parms->data == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		void *base_addr;
+		uint32_t offset = TF_ACT_REC_INDEX_2_OFFSET(parms->idx);
+		uint32_t tbl_scope_id;
+
+		session = (struct tf_session *)(tfp->session->core_data);
+
+		tbl_scope_id =
+			session->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+
+		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Table scope not allocated\n",
+				    parms->dir);
+			return -EINVAL;
+		}
+
+		/* Get the table scope control block associated with the
+		 * external pool
+		 */
+
+		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
+
+		if (tbl_scope_cb == NULL)
+			return -EINVAL;
+
+		/* External table, implicitly the Action table */
+		base_addr = tf_em_get_table_page(tbl_scope_cb,
+						 parms->dir,
+						 offset,
+						 RECORD_TABLE);
+		if (base_addr == NULL) {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Base address lookup failed\n",
+				    parms->dir);
+			return -EINVAL;
+		}
+
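+		/* offset is a byte offset within the whole table scope;
+		 * reduce it to the offset within the page returned above
+		 * before copying the entry data.
+		 */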
+		offset %= TF_EM_PAGE_SIZE;
+		rte_memcpy((char *)base_addr + offset,
+			   parms->data,
+			   parms->data_sz_in_bytes);
+	} else {
+		/* Internal table type processing */
+		rc = tf_set_tbl_entry_internal(tfp, parms);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Set failed, type:%d, rc:%d\n",
+				    parms->dir,
+				    parms->type,
+				    rc);
+		}
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_get_tbl_entry(struct tf *tfp,
+		 struct tf_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, External table type not supported\n",
+			    parms->dir);
+
+		rc = -EOPNOTSUPP;
+	} else {
+		/* Internal table type processing */
+		rc = tf_get_tbl_entry_internal(tfp, parms);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Get failed, type:%d, rc:%d\n",
+				    parms->dir,
+				    parms->type,
+				    rc);
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_scope(struct tf *tfp,
+		   struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+
+	/* check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	rc = tf_alloc_eem_tbl_scope(tfp, parms);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_scope(struct tf *tfp,
+		  struct tf_free_tbl_scope_parms *parms)
+{
+	int rc;
+
+	/* check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	/* free table scope and all associated resources */
+	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_entry(struct tf *tfp,
+		   struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+#if (TF_SHADOW == 1)
+	struct tf_session *tfs;
+#endif /* TF_SHADOW */
+
+	/* Check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+	/*
+	 * No shadow copy support for external tables, allocate and return
+	 */
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		rc = tf_alloc_tbl_entry_pool_external(tfp, parms);
+		return rc;
+	}
+
+#if (TF_SHADOW == 1)
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Search the Shadow DB for requested element. If not found go
+	 * allocate one from the Session Pool
+	 */
+	if (parms->search_enable && tfs->shadow_copy) {
+		rc = tf_alloc_tbl_entry_shadow(tfs, parms);
+		/* Entry found and parms populated with return data */
+		if (rc == 0)
+			return rc;
+	}
+#endif /* TF_SHADOW */
+
+	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
+	if (rc)
+		PMD_DRV_LOG(ERR, "dir%d, Alloc failed, rc:%d\n",
+			    parms->dir,
+			    rc);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_entry(struct tf *tfp,
+		  struct tf_free_tbl_entry_parms *parms)
+{
+	int rc;
+#if (TF_SHADOW == 1)
+	struct tf_session *tfs;
+#endif /* TF_SHADOW */
+
+	/* Check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+	/*
+	 * No shadow of external tables so just free the entry
+	 */
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		rc = tf_free_tbl_entry_pool_external(tfp, parms);
+		return rc;
+	}
+
+#if (TF_SHADOW == 1)
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Search the Shadow DB for requested element. If not found go
+	 * allocate one from the Session Pool
+	 */
+	if (parms->search_enable && tfs->shadow_copy) {
+		rc = tf_free_tbl_entry_shadow(tfs, parms);
+		/* Entry free'ed and parms populated with return data */
+		if (rc == 0)
+			return rc;
+	}
+#endif /* TF_SHADOW */
+
+	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
+
+	if (rc)
+		PMD_DRV_LOG(ERR, "dir:%d, Alloc failed, rc:%d\n",
+			    parms->dir,
+			    rc);
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 5a5e72f..cb7ce9d 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -7,6 +7,7 @@
 #define _TF_TBL_H_
 
 #include <stdint.h>
+#include "stack.h"
 
 enum tf_pg_tbl_lvl {
 	PT_LVL_0,
@@ -15,6 +16,48 @@ enum tf_pg_tbl_lvl {
 	PT_LVL_MAX
 };
 
+enum tf_em_table_type {
+	KEY0_TABLE,
+	KEY1_TABLE,
+	RECORD_TABLE,
+	EFC_TABLE,
+	MAX_TABLE
+};
+
+struct tf_em_page_tbl {
+	uint32_t	pg_count;
+	uint32_t	pg_size;
+	void		**pg_va_tbl;
+	uint64_t	*pg_pa_tbl;
+};
+
+struct tf_em_table {
+	int				type;
+	uint32_t			num_entries;
+	uint16_t			ctx_id;
+	uint32_t			entry_size;
+	int				num_lvl;
+	uint32_t			page_cnt[PT_LVL_MAX];
+	uint64_t			num_data_pages;
+	void				*l0_addr;
+	uint64_t			l0_dma_addr;
+	struct tf_em_page_tbl pg_tbl[PT_LVL_MAX];
+};
+
+struct tf_em_ctx_mem_info {
+	struct tf_em_table		em_tables[MAX_TABLE];
+};
+
+/** table scope control block content */
+struct tf_em_caps {
+	uint32_t flags;
+	uint32_t supported;
+	uint32_t max_entries_supported;
+	uint16_t key_entry_size;
+	uint16_t record_entry_size;
+	uint16_t efc_entry_size;
+};
+
 /** Invalid table scope id */
 #define TF_TBL_SCOPE_INVALID 0xffffffff
 
@@ -27,9 +70,49 @@ enum tf_pg_tbl_lvl {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
+	struct tf_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
+	struct tf_em_caps          em_caps[TF_DIR_MAX];
+	struct stack               ext_pool[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 	uint32_t              *ext_pool_mem[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 };
 
+/** Hardware Page sizes supported for EEM: 4K, 8K, 32K, 64K, 256K, 1M,
+ * 2M, 4M, 1G. Round-down other page sizes to the lower hardware page
+ * size supported.
+ */
+#define PAGE_SHIFT 22 /** 2M */
+
+#if (PAGE_SHIFT < 12)				/** < 4K >> 4K */
+#define TF_EM_PAGE_SHIFT 12
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
+#elif (PAGE_SHIFT <= 13)			/** 4K, 8K */
+#define TF_EM_PAGE_SHIFT 13
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
+#elif (PAGE_SHIFT < 16)				/** 16K, 32K >> 8K */
+#define TF_EM_PAGE_SHIFT 15
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_32K
+#elif (PAGE_SHIFT <= 17)			/** 64K, 128K >> 64K */
+#define TF_EM_PAGE_SHIFT 16
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
+#elif (PAGE_SHIFT <= 19)			/** 256K, 512K >> 256K */
+#define TF_EM_PAGE_SHIFT 18
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
+#elif (PAGE_SHIFT <= 21)			/** 1M */
+#define TF_EM_PAGE_SHIFT 20
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
+#elif (PAGE_SHIFT <= 22)			/** 2M, 4M */
+#define TF_EM_PAGE_SHIFT 21
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
+#elif (PAGE_SHIFT <= 29)			/** 8M ... 512M >> 4M */
+#define TF_EM_PAGE_SHIFT 22
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
+#else						/** >= 1G >> 1G */
+#define TF_EM_PAGE_SHIFT	30
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#endif
+
+#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
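+
+/* Example: with PAGE_SHIFT 22 (2M) above, the (PAGE_SHIFT <= 22) branch
+ * applies, so TF_EM_PAGE_SHIFT is 21 and TF_EM_PAGE_SIZE and
+ * TF_EM_PAGE_ALIGNMENT both evaluate to 1 << 21 (2MB).
+ */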
+
 /**
  * Initialize table pool structure to indicate
  * no table scope has been associated with the
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 13/34] net/bnxt: fetch SVIF information from the firmware
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (11 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 12/34] net/bnxt: add EM/EEM functionality Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 14/34] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
                     ` (22 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

SVIF (source virtual interface) is used to represent a physical port,
physical function, or a virtual function. SVIF is compared during L2
context and exact match lookups in the TX direction, and is masked to
extract port information during L2 context and exact match lookups in
the RX direction. Hence, the driver needs this SVIF information to
program the L2 context and exact match tables.
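
As a usage illustration, a caller could fetch either value for a port
as sketched below (hypothetical caller code; bnxt_get_svif() is the
only name introduced by this patch):

	uint16_t func_svif, port_svif;

	/* true selects the function SVIF, false the physical port SVIF */
	func_svif = bnxt_get_svif(port_id, true);
	port_svif = bnxt_get_svif(port_id, false);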

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  6 ++++++
 drivers/net/bnxt/bnxt_ethdev.c | 14 ++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.c   | 34 ++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 55 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index a8e57ca..2ed56f4 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -682,6 +682,9 @@ struct bnxt {
 #define BNXT_FLOW_ID_MASK	0x0000ffff
 	struct bnxt_mark_info	*mark_table;
 
+#define	BNXT_SVIF_INVALID	0xFFFF
+	uint16_t		func_svif;
+	uint16_t		port_svif;
 	struct tf               tfp;
 };
 
@@ -723,4 +726,7 @@ extern int bnxt_logtype_driver;
 
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
+
 #endif
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 93d0062..f3cc745 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4696,6 +4696,18 @@ static void bnxt_config_vf_req_fwd(struct bnxt *bp)
 	ALLOW_FUNC(HWRM_VNIC_TPA_CFG);
 }
 
+uint16_t
+bnxt_get_svif(uint16_t port_id, bool func_svif)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	bp = eth_dev->data->dev_private;
+
+	return func_svif ? bp->func_svif : bp->port_svif;
+}
+
 static int bnxt_init_fw(struct bnxt *bp)
 {
 	uint16_t mtu;
@@ -4731,6 +4743,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 	if (rc)
 		return rc;
 
+	bnxt_hwrm_port_mac_qcfg(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index c8309ee..43b330f 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3010,6 +3010,8 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 	uint16_t flags;
 	int rc = 0;
+	uint16_t svif_info;
+	bp->func_svif = BNXT_SVIF_INVALID;
 
 	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
@@ -3020,6 +3022,12 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 
 	/* Hard Coded.. 0xfff VLAN ID mask */
 	bp->vlan = rte_le_to_cpu_16(resp->vlan) & 0xfff;
+
+	svif_info = rte_le_to_cpu_16(resp->svif_info);
+	if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
+		bp->func_svif =	svif_info &
+				     HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK;
+
 	flags = rte_le_to_cpu_16(resp->flags);
 	if (BNXT_PF(bp) && (flags & HWRM_FUNC_QCFG_OUTPUT_FLAGS_MULTI_HOST))
 		bp->flags |= BNXT_FLAG_MULTI_HOST;
@@ -3056,6 +3064,32 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
+{
+	struct hwrm_port_mac_qcfg_input req = {0};
+	struct hwrm_port_mac_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	uint16_t port_svif_info;
+	int rc;
+
+	bp->port_svif = BNXT_SVIF_INVALID;
+
+	HWRM_PREP(&req, HWRM_PORT_MAC_QCFG, BNXT_USE_CHIMP_MB);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	port_svif_info = rte_le_to_cpu_16(resp->port_svif_info);
+	if (port_svif_info &
+	    HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_VALID)
+		bp->port_svif = port_svif_info &
+			HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_MASK;
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 static void copy_func_cfg_to_qcaps(struct hwrm_func_cfg_input *fcfg,
 				   struct hwrm_func_qcaps_output *qcaps)
 {
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index df7aa74..0079d8a 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -193,6 +193,7 @@ int bnxt_hwrm_port_qstats(struct bnxt *bp);
 int bnxt_hwrm_port_clr_stats(struct bnxt *bp);
 int bnxt_hwrm_port_led_cfg(struct bnxt *bp, bool led_on);
 int bnxt_hwrm_port_led_qcaps(struct bnxt *bp);
+int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp);
 int bnxt_hwrm_func_cfg_vf_set_flags(struct bnxt *bp, uint16_t vf,
 					uint32_t flags);
 void vf_vnic_set_rxmask_cb(struct bnxt_vnic_info *vnic, void *flagp);
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 14/34] net/bnxt: fetch vnic info from DPDK port
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (12 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 13/34] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature Venkat Duvvuru
                     ` (21 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

VNIC is needed for the driver to program the action record for Rx
flows. The VNIC determines which receive rings are used to place the
received packets. This patch introduces a routine that converts a
given DPDK port to its VNIC.
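
A minimal usage sketch (hypothetical caller code; bnxt_get_vnic_id()
is the only name introduced by this patch):

	/* FW id of the default VNIC backing the given DPDK port */
	uint16_t vnic_id = bnxt_get_vnic_id(port_id);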

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  1 +
 drivers/net/bnxt/bnxt_ethdev.c | 15 +++++++++++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 2ed56f4..c4507f7 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -727,6 +727,7 @@ extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
 
+uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
 
 #endif
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f3cc745..57ed90f 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4708,6 +4708,21 @@ bnxt_get_svif(uint16_t port_id, bool func_svif)
 	return func_svif ? bp->func_svif : bp->port_svif;
 }
 
+uint16_t
+bnxt_get_vnic_id(uint16_t port)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt_vnic_info *vnic;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port];
+	bp = eth_dev->data->dev_private;
+
+	vnic = BNXT_GET_DEFAULT_VNIC(bp);
+
+	return vnic->fw_vnic_id;
+}
+
 static int bnxt_init_fw(struct bnxt *bp)
 {
 	uint16_t mtu;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (13 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 14/34] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 16/34] net/bnxt: add support for ULP session manager init Venkat Duvvuru
                     ` (20 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

This feature can be enabled by passing
"-w 0000:0d:00.0,host-based-truflow=1” to the DPDK application.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  4 ++-
 drivers/net/bnxt/bnxt_ethdev.c | 73 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 76 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index c4507f7..cd84ebd 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -685,7 +685,9 @@ struct bnxt {
 #define	BNXT_SVIF_INVALID	0xFFFF
 	uint16_t		func_svif;
 	uint16_t		port_svif;
-	struct tf               tfp;
+
+	struct tf		tfp;
+	uint8_t			truflow;
 };
 
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 57ed90f..c4bbf1d 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -12,6 +12,7 @@
 #include <rte_malloc.h>
 #include <rte_cycles.h>
 #include <rte_alarm.h>
+#include <rte_kvargs.h>
 
 #include "bnxt.h"
 #include "bnxt_filter.h"
@@ -126,6 +127,18 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 				     DEV_RX_OFFLOAD_SCATTER | \
 				     DEV_RX_OFFLOAD_RSS_HASH)
 
+#define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
+static const char *const bnxt_dev_args[] = {
+	BNXT_DEVARG_TRUFLOW,
+	NULL
+};
+
+/*
+ * truflow == false to disable the feature
+ * truflow == true to enable the feature
+ */
+#define	BNXT_DEVARG_TRUFLOW_INVALID(truflow)	((truflow) > 1)
+
 static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask);
 static void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
 static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
@@ -4854,6 +4867,63 @@ static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev)
 }
 
 static int
+bnxt_parse_devarg_truflow(__rte_unused const char *key,
+			  const char *value, void *opaque_arg)
+{
+	struct bnxt *bp = opaque_arg;
+	unsigned long truflow;
+	char *end = NULL;
+
+	if (!value || !opaque_arg) {
+		PMD_DRV_LOG(ERR,
+			    "Invalid parameter passed to truflow devargs.\n");
+		return -EINVAL;
+	}
+
+	truflow = strtoul(value, &end, 10);
+	if (end == NULL || *end != '\0' ||
+	    (truflow == ULONG_MAX && errno == ERANGE)) {
+		PMD_DRV_LOG(ERR,
+			    "Invalid parameter passed to truflow devargs.\n");
+		return -EINVAL;
+	}
+
+	if (BNXT_DEVARG_TRUFLOW_INVALID(truflow)) {
+		PMD_DRV_LOG(ERR,
+			    "Invalid value passed to truflow devargs.\n");
+		return -EINVAL;
+	}
+
+	bp->truflow = truflow;
+	if (bp->truflow)
+		PMD_DRV_LOG(INFO, "Host-based truflow feature enabled.\n");
+
+	return 0;
+}
+
+static void
+bnxt_parse_dev_args(struct bnxt *bp, struct rte_devargs *devargs)
+{
+	struct rte_kvargs *kvlist;
+
+	if (devargs == NULL)
+		return;
+
+	kvlist = rte_kvargs_parse(devargs->args, bnxt_dev_args);
+	if (kvlist == NULL)
+		return;
+
+	/*
+	 * Handler for "truflow" devarg.
+	 * Invoked as, for example: "-w 0000:00:0d.0,host-based-truflow=1"
+	 */
+	rte_kvargs_process(kvlist, BNXT_DEVARG_TRUFLOW,
+			   bnxt_parse_devarg_truflow, bp);
+
+	rte_kvargs_free(kvlist);
+}
+
+static int
 bnxt_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
@@ -4879,6 +4949,9 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 
 	bp = eth_dev->data->dev_private;
 
+	/* Parse dev arguments passed on when starting the DPDK application. */
+	bnxt_parse_dev_args(bp, pci_dev->device.devargs);
+
 	bp->flags &= ~BNXT_FLAG_RX_VECTOR_PKT_MODE;
 
 	if (bnxt_vf_pciid(pci_dev->id.device_id))
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 16/34] net/bnxt: add support for ULP session manager init
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (14 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 17/34] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
                     ` (19 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Mike Baucom

A ULP session contains all the resources needed to support rte flow
offloads. A session is initialized as part of rte_eth_device start.
A DPDK application can have multiple interfaces, which means
rte_eth_device start will be called for each of these devices.
The ULP session manager makes sure that a single ULP session is
initialized only once. Apart from this, it also initializes the MARK
database, EEM table & flow database. The ULP session manager also
manages a list of all opened ULP sessions.
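
In outline, the initialization added by this patch proceeds as below
(a simplified sketch of the code that follows; error handling omitted):

	session = ulp_session_init(bp, &init); /* find/create per PCI domain+bus */
	if (init)
		return ulp_ctx_attach(&bp->ulp_ctx, session); /* already opened */
	ulp_ctx_init(bp, session);      /* alloc context, open the TF session */
	ulp_mark_db_init(&bp->ulp_ctx); /* MARK database */
	ulp_flow_db_init(&bp->ulp_ctx); /* flow database */
	ulp_eem_tbl_scope_init(bp);     /* EEM table scope */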

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   6 +-
 drivers/net/bnxt/bnxt.h                       |   5 +
 drivers/net/bnxt/bnxt_ethdev.c                |   4 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |  35 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            | 527 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            | 100 +++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 187 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |  77 ++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |  94 +++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h        |  49 +++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     |  28 ++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |  35 ++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  40 ++
 13 files changed, 1186 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.c
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_struct.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 4c95847..bb9b888 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -44,7 +44,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
 ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD), y)
-CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core -I$(SRCDIR)/tf_ulp
 endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
@@ -57,6 +57,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index cd84ebd..cd20740 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -22,6 +22,7 @@
 #include "bnxt_util.h"
 
 #include "tf_core.h"
+#include "bnxt_ulp.h"
 
 /* Vendor ID */
 #define PCI_VENDOR_ID_BROADCOM		0x14E4
@@ -687,6 +688,7 @@ struct bnxt {
 	uint16_t		port_svif;
 
 	struct tf		tfp;
+	struct bnxt_ulp_context	ulp_ctx;
 	uint8_t			truflow;
 };
 
@@ -729,6 +731,9 @@ extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
 
+int32_t bnxt_ulp_init(struct bnxt *bp);
+void bnxt_ulp_deinit(struct bnxt *bp);
+
 uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index c4bbf1d..1703ce3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -904,6 +904,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 	pthread_mutex_lock(&bp->def_cp_lock);
 	bnxt_schedule_fw_health_check(bp);
 	pthread_mutex_unlock(&bp->def_cp_lock);
+
+	if (bp->truflow)
+		bnxt_ulp_init(bp);
+
 	return 0;
 
 error:
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
new file mode 100644
index 0000000..3516df4
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_TF_COMMON_H_
+#define _BNXT_TF_COMMON_H_
+
+#define BNXT_TF_DBG(lvl, fmt, args...)	PMD_DRV_LOG(lvl, fmt, ## args)
+
+#define BNXT_ULP_EM_FLOWS			8192
+#define BNXT_ULP_1M_FLOWS			1000000
+#define BNXT_EEM_RX_GLOBAL_ID_MASK		(BNXT_ULP_1M_FLOWS - 1)
+#define BNXT_EEM_TX_GLOBAL_ID_MASK		(BNXT_ULP_1M_FLOWS - 1)
+#define BNXT_EEM_HASH_KEY2_USED			0x8000000
+#define BNXT_EEM_RX_HW_HASH_KEY2_BIT		BNXT_ULP_1M_FLOWS
+#define	BNXT_ULP_DFLT_RX_MAX_KEY		512
+#define	BNXT_ULP_DFLT_RX_MAX_ACTN_ENTRY		256
+#define	BNXT_ULP_DFLT_RX_MEM			0
+#define	BNXT_ULP_RX_NUM_FLOWS			32
+#define	BNXT_ULP_RX_TBL_IF_ID			0
+#define	BNXT_ULP_DFLT_TX_MAX_KEY		512
+#define	BNXT_ULP_DFLT_TX_MAX_ACTN_ENTRY		256
+#define	BNXT_ULP_DFLT_TX_MEM			0
+#define	BNXT_ULP_TX_NUM_FLOWS			32
+#define	BNXT_ULP_TX_TBL_IF_ID			0
+
+struct bnxt_ulp_mark_tbl *
+bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
+
+int32_t
+bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_mark_tbl *mark_tbl);
+
+#endif /* _BNXT_TF_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
new file mode 100644
index 0000000..7afc6bf
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -0,0 +1,527 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_tailq.h>
+
+#include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
+#include "bnxt.h"
+#include "tf_core.h"
+#include "tf_ext_flow_handle.h"
+
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "ulp_mark_mgr.h"
+#include "ulp_flow_db.h"
+
+/* Linked list of all TF sessions. */
+STAILQ_HEAD(, bnxt_ulp_session_state) bnxt_ulp_session_list =
+			STAILQ_HEAD_INITIALIZER(bnxt_ulp_session_list);
+
+/* Mutex to synchronize bnxt_ulp_session_list operations. */
+static pthread_mutex_t bnxt_ulp_global_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+/*
+ * Initialize a ULP session.
+ * A ULP session will contain all the resources needed to support rte flow
+ * offloads. A session is initialized as part of rte_eth_device start.
+ * A single vswitch instance can have multiple uplinks, which means
+ * rte_eth_device start will be called for each of these devices.
+ * The ULP session manager will make sure that a single ULP session is
+ * initialized only once. Apart from this, it also initializes the MARK
+ * database, EEM table & flow database. The ULP session manager also
+ * manages a list of all opened ULP sessions.
+ */
+static int32_t
+ulp_ctx_session_open(struct bnxt *bp,
+		     struct bnxt_ulp_session_state *session)
+{
+	struct rte_eth_dev		*ethdev = bp->eth_dev;
+	int32_t				rc = 0;
+	struct tf_open_session_parms	params;
+
+	memset(&params, 0, sizeof(params));
+
+	rc = rte_eth_dev_get_name_by_port(ethdev->data->port_id,
+					  params.ctrl_chan_name);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Invalid port %d, rc = %d\n",
+			    ethdev->data->port_id, rc);
+		return rc;
+	}
+
+	rc = tf_open_session(&bp->tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to open TF session - %s, rc = %d\n",
+			    params.ctrl_chan_name, rc);
+		return -EINVAL;
+	}
+	session->session_opened = 1;
+	session->g_tfp = &bp->tfp;
+	return rc;
+}
+
+static void
+bnxt_init_tbl_scope_parms(struct bnxt *bp,
+			  struct tf_alloc_tbl_scope_parms *params)
+{
+	struct bnxt_ulp_device_params	*dparms;
+	uint32_t dev_id;
+	int rc;
+
+	rc = bnxt_ulp_cntxt_dev_id_get(&bp->ulp_ctx, &dev_id);
+	if (rc)
+		/* TBD: For now, just use default. */
+		dparms = NULL;
+	else
+		dparms = bnxt_ulp_device_params_get(dev_id);
+
+	if (!dparms) {
+		params->rx_max_key_sz_in_bits = BNXT_ULP_DFLT_RX_MAX_KEY;
+		params->rx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_RX_MAX_ACTN_ENTRY;
+		params->rx_mem_size_in_mb = BNXT_ULP_DFLT_RX_MEM;
+		params->rx_num_flows_in_k = BNXT_ULP_RX_NUM_FLOWS;
+		params->rx_tbl_if_id = BNXT_ULP_RX_TBL_IF_ID;
+
+		params->tx_max_key_sz_in_bits = BNXT_ULP_DFLT_TX_MAX_KEY;
+		params->tx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_TX_MAX_ACTN_ENTRY;
+		params->tx_mem_size_in_mb = BNXT_ULP_DFLT_TX_MEM;
+		params->tx_num_flows_in_k = BNXT_ULP_TX_NUM_FLOWS;
+		params->tx_tbl_if_id = BNXT_ULP_TX_TBL_IF_ID;
+	} else {
+		params->rx_max_key_sz_in_bits = BNXT_ULP_DFLT_RX_MAX_KEY;
+		params->rx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_RX_MAX_ACTN_ENTRY;
+		params->rx_mem_size_in_mb = BNXT_ULP_DFLT_RX_MEM;
+		params->rx_num_flows_in_k = dparms->num_flows / (1024);
+		params->rx_tbl_if_id = BNXT_ULP_RX_TBL_IF_ID;
+
+		params->tx_max_key_sz_in_bits = BNXT_ULP_DFLT_TX_MAX_KEY;
+		params->tx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_TX_MAX_ACTN_ENTRY;
+		params->tx_mem_size_in_mb = BNXT_ULP_DFLT_TX_MEM;
+		params->tx_num_flows_in_k = dparms->num_flows / (1024);
+		params->tx_tbl_if_id = BNXT_ULP_TX_TBL_IF_ID;
+	}
+}
+
+/* Initialize Extended Exact Match host memory. */
+static int32_t
+ulp_eem_tbl_scope_init(struct bnxt *bp)
+{
+	struct tf_alloc_tbl_scope_parms params = {0};
+	int rc;
+
+	bnxt_init_tbl_scope_parms(bp, &params);
+
+	rc = tf_alloc_tbl_scope(&bp->tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to allocate eem table scope rc = %d\n",
+			    rc);
+		return rc;
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_set(&bp->ulp_ctx, params.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to set table scope id\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+/* The function to free and deinit the ulp context data. */
+static int32_t
+ulp_ctx_deinit(struct bnxt *bp,
+	       struct bnxt_ulp_session_state *session)
+{
+	if (!session || !bp) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Free the contents */
+	if (session->cfg_data) {
+		rte_free(session->cfg_data);
+		bp->ulp_ctx.cfg_data = NULL;
+		session->cfg_data = NULL;
+	}
+	return 0;
+}
+
+/* The function to allocate and initialize the ulp context data. */
+static int32_t
+ulp_ctx_init(struct bnxt *bp,
+	     struct bnxt_ulp_session_state *session)
+{
+	struct bnxt_ulp_data	*ulp_data;
+	int32_t			rc = 0;
+
+	if (!session || !bp) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Allocate memory to hold ulp context data. */
+	ulp_data = rte_zmalloc("bnxt_ulp_data",
+			       sizeof(struct bnxt_ulp_data), 0);
+	if (!ulp_data) {
+		BNXT_TF_DBG(ERR, "Failed to allocate memory for ulp data\n");
+		return -ENOMEM;
+	}
+
+	/* Increment the ulp context data reference count usage. */
+	bp->ulp_ctx.cfg_data = ulp_data;
+	session->cfg_data = ulp_data;
+	ulp_data->ref_cnt++;
+
+	/* Open the ulp session. */
+	rc = ulp_ctx_session_open(bp, session);
+	if (rc) {
+		(void)ulp_ctx_deinit(bp, session);
+		return rc;
+	}
+	bnxt_ulp_cntxt_tfp_set(&bp->ulp_ctx, session->g_tfp);
+	return rc;
+}
+
+static int32_t
+ulp_ctx_attach(struct bnxt_ulp_context *ulp_ctx,
+	       struct bnxt_ulp_session_state *session)
+{
+	if (!ulp_ctx || !session) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Increment the ulp context data reference count usage. */
+	ulp_ctx->cfg_data = session->cfg_data;
+	ulp_ctx->cfg_data->ref_cnt++;
+
+	/* TBD call TF_session_attach. */
+	ulp_ctx->g_tfp = session->g_tfp;
+	return 0;
+}
+
+/*
+ * Initialize the state of a ULP session.
+ * If the state of a ULP session is not initialized, set its state to
+ * initialized. If the state is already initialized, do nothing.
+ */
+static void
+ulp_context_initialized(struct bnxt_ulp_session_state *session, bool *init)
+{
+	pthread_mutex_lock(&session->bnxt_ulp_mutex);
+
+	if (!session->bnxt_ulp_init) {
+		session->bnxt_ulp_init = true;
+		*init = false;
+	} else {
+		*init = true;
+	}
+
+	pthread_mutex_unlock(&session->bnxt_ulp_mutex);
+}
+
+/*
+ * Check if a ULP session is already allocated for a specific PCI
+ * domain & bus. If it is already allocated, simply return the session
+ * pointer, otherwise allocate a new session.
+ */
+static struct bnxt_ulp_session_state *
+ulp_get_session(struct rte_pci_addr *pci_addr)
+{
+	struct bnxt_ulp_session_state *session;
+
+	STAILQ_FOREACH(session, &bnxt_ulp_session_list, next) {
+		if (session->pci_info.domain == pci_addr->domain &&
+		    session->pci_info.bus == pci_addr->bus) {
+			return session;
+		}
+	}
+	return NULL;
+}
+
+/*
+ * Allocate and initialize a ULP session and set its state to INITIALIZED.
+ * If it is already initialized, simply return the existing session.
+ */
+static struct bnxt_ulp_session_state *
+ulp_session_init(struct bnxt *bp,
+		 bool *init)
+{
+	struct rte_pci_device		*pci_dev;
+	struct rte_pci_addr		*pci_addr;
+	struct bnxt_ulp_session_state	*session;
+
+	if (!bp)
+		return NULL;
+
+	pci_dev = RTE_DEV_TO_PCI(bp->eth_dev->device);
+	pci_addr = &pci_dev->addr;
+
+	pthread_mutex_lock(&bnxt_ulp_global_mutex);
+
+	session = ulp_get_session(pci_addr);
+	if (!session) {
+		/* Session not found, allocate a new one */
+		session = rte_zmalloc("bnxt_ulp_session",
+				      sizeof(struct bnxt_ulp_session_state),
+				      0);
+		if (!session) {
+			BNXT_TF_DBG(ERR,
+				    "Allocation failed for bnxt_ulp_session\n");
+			pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+			return NULL;
+		} else {
+			/* Add it to the queue */
+			session->pci_info.domain = pci_addr->domain;
+			session->pci_info.bus = pci_addr->bus;
+			pthread_mutex_init(&session->bnxt_ulp_mutex, NULL);
+			STAILQ_INSERT_TAIL(&bnxt_ulp_session_list,
+					   session, next);
+		}
+	}
+	ulp_context_initialized(session, init);
+	pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+	return session;
+}
+
+/*
+ * Called when a port is initialized by DPDK. This function initializes
+ * the ULP context and the rest of the infrastructure associated with it.
+ */
+int32_t
+bnxt_ulp_init(struct bnxt *bp)
+{
+	struct bnxt_ulp_session_state *session;
+	bool init;
+	int rc;
+
+	/*
+	 * Multiple uplink ports can be associated with a single vswitch.
+	 * Make sure only the port that is started first will initialize
+	 * the TF session.
+	 */
+	session = ulp_session_init(bp, &init);
+	if (!session) {
+		BNXT_TF_DBG(ERR, "Failed to initialize the tf session\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * If ULP is already initialized for a specific domain then simply
+	 * assign the ulp context to this rte_eth_dev.
+	 */
+	if (init) {
+		rc = ulp_ctx_attach(&bp->ulp_ctx, session);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "Failed to attach the ulp context\n");
+		}
+		return rc;
+	}
+
+	/* Allocate and Initialize the ulp context. */
+	rc = ulp_ctx_init(bp, session);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the ulp context\n");
+		goto jump_to_error;
+	}
+
+	/* Create the Mark database. */
+	rc = ulp_mark_db_init(&bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the mark database\n");
+		goto jump_to_error;
+	}
+
+	/* Create the flow database. */
+	rc = ulp_flow_db_init(&bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the flow database\n");
+		goto jump_to_error;
+	}
+
+	/* Create the eem table scope. */
+	rc = ulp_eem_tbl_scope_init(bp);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the eem scope table\n");
+		goto jump_to_error;
+	}
+
+	return rc;
+
+jump_to_error:
+	return -ENOMEM;
+}
+
+/* Below are the access functions to access internal data of ulp context. */
+
+/* Function to set the Mark DB into the context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_mark_tbl *mark_tbl)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->mark_tbl = mark_tbl;
+
+	return 0;
+}
+
+/* Function to retrieve the Mark DB from the context. */
+struct bnxt_ulp_mark_tbl *
+bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return NULL;
+
+	return ulp_ctx->cfg_data->mark_tbl;
+}
+
+/* Function to set the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t dev_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		ulp_ctx->cfg_data->dev_id = dev_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to get the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_get(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t *dev_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		*dev_id = ulp_ctx->cfg_data->dev_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to get the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_get(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t *tbl_scope_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		*tbl_scope_id = ulp_ctx->cfg_data->tbl_scope_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to set the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_set(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t tbl_scope_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		ulp_ctx->cfg_data->tbl_scope_id = tbl_scope_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to set the tfp session details from the ulp context. */
+int32_t
+bnxt_ulp_cntxt_tfp_set(struct bnxt_ulp_context *ulp, struct tf *tfp)
+{
+	if (!ulp) {
+		BNXT_TF_DBG(ERR, "Invalid arguments\n");
+		return -EINVAL;
+	}
+
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	ulp->g_tfp = tfp;
+	return 0;
+}
+
+/* Function to get the tfp session details from the ulp context. */
+struct tf *
+bnxt_ulp_cntxt_tfp_get(struct bnxt_ulp_context *ulp)
+{
+	if (!ulp) {
+		BNXT_TF_DBG(ERR, "Invalid arguments\n");
+		return NULL;
+	}
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	return ulp->g_tfp;
+}
+
+/*
+ * Get the device table entry based on the device id.
+ *
+ * dev_id [in] The device id of the hardware
+ *
+ * Returns the pointer to the device parameters.
+ */
+struct bnxt_ulp_device_params *
+bnxt_ulp_device_params_get(uint32_t dev_id)
+{
+	if (dev_id < BNXT_ULP_MAX_NUM_DEVICES)
+		return &ulp_device_params[dev_id];
+	return NULL;
+}
+
+/* Function to set the flow database to the ulp context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_flow_db_set(struct bnxt_ulp_context	*ulp_ctx,
+				struct bnxt_ulp_flow_db	*flow_db)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->flow_db = flow_db;
+	return 0;
+}
+
+/* Function to get the flow database from the ulp context. */
+struct bnxt_ulp_flow_db	*
+bnxt_ulp_cntxt_ptr2_flow_db_get(struct bnxt_ulp_context	*ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return NULL;
+	}
+
+	return ulp_ctx->cfg_data->flow_db;
+}
+
+/* Function to get the ulp context from eth device. */
+struct bnxt_ulp_context	*
+bnxt_ulp_eth_dev_ptr2_cntxt_get(struct rte_eth_dev	*dev)
+{
+	struct bnxt	*bp;
+
+	bp = (struct bnxt *)dev->data->dev_private;
+	if (!bp) {
+		BNXT_TF_DBG(ERR, "Bnxt private data is not initialized\n");
+		return NULL;
+	}
+	return &bp->ulp_ctx;
+}
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
new file mode 100644
index 0000000..d88225f
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_ULP_H_
+#define _BNXT_ULP_H_
+
+#include <inttypes.h>
+#include <stdbool.h>
+#include <sys/queue.h>
+
+#include "rte_ethdev.h"
+
+struct bnxt_ulp_data {
+	uint32_t			tbl_scope_id;
+	struct bnxt_ulp_mark_tbl	*mark_tbl;
+	uint32_t			dev_id; /* Hardware device id */
+	uint32_t			ref_cnt;
+	struct bnxt_ulp_flow_db		*flow_db;
+};
+
+struct bnxt_ulp_context {
+	struct bnxt_ulp_data	*cfg_data;
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	struct tf		*g_tfp;
+};
+
+struct bnxt_ulp_pci_info {
+	uint32_t	domain;
+	uint8_t		bus;
+};
+
+struct bnxt_ulp_session_state {
+	STAILQ_ENTRY(bnxt_ulp_session_state)	next;
+	bool					bnxt_ulp_init;
+	pthread_mutex_t				bnxt_ulp_mutex;
+	struct bnxt_ulp_pci_info		pci_info;
+	struct bnxt_ulp_data			*cfg_data;
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	struct tf				*g_tfp;
+	uint32_t				session_opened;
+};
+
+/* ULP flow id structure */
+struct rte_tf_flow {
+	uint32_t	flow_id;
+};
+
+/* Function to set the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx, uint32_t dev_id);
+
+/* Function to get the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_get(struct bnxt_ulp_context *ulp_ctx, uint32_t *dev_id);
+
+/* Function to set the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_set(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t tbl_scope_id);
+
+/* Function to get the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_get(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t *tbl_scope_id);
+
+/* Function to set the tfp session details in the ulp context. */
+int32_t
+bnxt_ulp_cntxt_tfp_set(struct bnxt_ulp_context *ulp, struct tf *tfp);
+
+/* Function to get the tfp session details from ulp context. */
+struct tf *
+bnxt_ulp_cntxt_tfp_get(struct bnxt_ulp_context *ulp);
+
+/* Get the device table entry based on the device id. */
+struct bnxt_ulp_device_params *
+bnxt_ulp_device_params_get(uint32_t dev_id);
+
+/* Function to set the Mark DB pointer in the ulp context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_mark_tbl *mark_tbl);
+
+/* Function to get the Mark DB pointer from the ulp context. */
+struct bnxt_ulp_mark_tbl *
+bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
+
+/* Function to set the flow database to the ulp context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_flow_db_set(struct bnxt_ulp_context	*ulp_ctx,
+				struct bnxt_ulp_flow_db	*flow_db);
+
+/* Function to get the flow database from the ulp context. */
+struct bnxt_ulp_flow_db	*
+bnxt_ulp_cntxt_ptr2_flow_db_get(struct bnxt_ulp_context	*ulp_ctx);
+
+/* Function to get the ulp context from eth device. */
+struct bnxt_ulp_context	*
+bnxt_ulp_eth_dev_ptr2_cntxt_get(struct rte_eth_dev *dev);
+
+#endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
new file mode 100644
index 0000000..3dd39c1
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_malloc.h>
+#include "bnxt.h"
+#include "bnxt_tf_common.h"
+#include "ulp_flow_db.h"
+#include "ulp_template_struct.h"
+
+/*
+ * Helper function to allocate the flow table and initialize
+ * the stack for allocation operations.
+ *
+ * flow_db [in] Ptr to flow database structure
+ * tbl_idx [in] The index to table creation.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+static int32_t
+ulp_flow_db_alloc_resource(struct bnxt_ulp_flow_db *flow_db,
+			   enum bnxt_ulp_flow_db_tables tbl_idx)
+{
+	uint32_t			idx = 0;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+	uint32_t			size;
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	size = sizeof(struct ulp_fdb_resource_info) * flow_tbl->num_resources;
+	flow_tbl->flow_resources =
+			rte_zmalloc("ulp_fdb_resource_info", size, 0);
+
+	if (!flow_tbl->flow_resources) {
+		BNXT_TF_DBG(ERR, "Failed to alloc memory for flow table\n");
+		return -ENOMEM;
+	}
+	size = sizeof(uint32_t) * flow_tbl->num_resources;
+	flow_tbl->flow_tbl_stack = rte_zmalloc("flow_tbl_stack", size, 0);
+	if (!flow_tbl->flow_tbl_stack) {
+		BNXT_TF_DBG(ERR, "Failed to alloc memory flow tbl stack\n");
+		return -ENOMEM;
+	}
+	size = (flow_tbl->num_flows / sizeof(uint64_t)) + 1;
+	flow_tbl->active_flow_tbl = rte_zmalloc("active flow tbl", size, 0);
+	if (!flow_tbl->active_flow_tbl) {
+		BNXT_TF_DBG(ERR, "Failed to alloc memory active tbl\n");
+		return -ENOMEM;
+	}
+
+	/* Initialize the stack table. */
+	for (idx = 0; idx < flow_tbl->num_resources; idx++)
+		flow_tbl->flow_tbl_stack[idx] = idx;
+
+	/* Ignore the first element in the list so that a flow id of
+	 * zero can be treated as invalid.
+	 */
+	flow_tbl->head_index = 1;
+	/* Tail points to the last entry in the list. */
+	flow_tbl->tail_index = flow_tbl->num_resources - 1;
+	return 0;
+}
+
+/*
+ * Helper function to de allocate the flow table.
+ *
+ * flow_db [in] Ptr to flow database structure
+ * tbl_idx [in] The index to table creation.
+ *
+ * Returns none.
+ */
+static void
+ulp_flow_db_dealloc_resource(struct bnxt_ulp_flow_db *flow_db,
+			     enum bnxt_ulp_flow_db_tables tbl_idx)
+{
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* Free all the allocated tables in the flow table. */
+	if (flow_tbl->active_flow_tbl) {
+		rte_free(flow_tbl->active_flow_tbl);
+		flow_tbl->active_flow_tbl = NULL;
+	}
+
+	if (flow_tbl->flow_tbl_stack) {
+		rte_free(flow_tbl->flow_tbl_stack);
+		flow_tbl->flow_tbl_stack = NULL;
+	}
+
+	if (flow_tbl->flow_resources) {
+		rte_free(flow_tbl->flow_resources);
+		flow_tbl->flow_resources = NULL;
+	}
+}
+
+/*
+ * Initialize the flow database. Memory is allocated in this
+ * call and assigned to the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt)
+{
+	struct bnxt_ulp_device_params		*dparms;
+	struct bnxt_ulp_flow_tbl		*flow_tbl;
+	struct bnxt_ulp_flow_db			*flow_db;
+	uint32_t				dev_id;
+
+	/* Get the dev specific number of flows that need to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctxt, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	flow_db = rte_zmalloc("bnxt_ulp_flow_db",
+			      sizeof(struct bnxt_ulp_flow_db), 0);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR,
+			    "Failed to allocate memory for flow table ptr\n");
+		goto error_free;
+	}
+
+	/* Attach the flow database to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_flow_db_set(ulp_ctxt, flow_db);
+
+	/* Populate the regular flow table limits. */
+	flow_tbl = &flow_db->flow_tbl[BNXT_ULP_REGULAR_FLOW_TABLE];
+	flow_tbl->num_flows = dparms->num_flows + 1;
+	flow_tbl->num_resources = (flow_tbl->num_flows *
+				   dparms->num_resources_per_flow);
+
+	/* Populate the default flow table limits. */
+	flow_tbl = &flow_db->flow_tbl[BNXT_ULP_DEFAULT_FLOW_TABLE];
+	flow_tbl->num_flows = BNXT_FLOW_DB_DEFAULT_NUM_FLOWS + 1;
+	flow_tbl->num_resources = (flow_tbl->num_flows *
+				   BNXT_FLOW_DB_DEFAULT_NUM_RESOURCES);
+
+	/* Allocate the resource for the regular flow table. */
+	if (ulp_flow_db_alloc_resource(flow_db, BNXT_ULP_REGULAR_FLOW_TABLE))
+		goto error_free;
+	if (ulp_flow_db_alloc_resource(flow_db, BNXT_ULP_DEFAULT_FLOW_TABLE))
+		goto error_free;
+
+	/* All good so return. */
+	return 0;
+error_free:
+	ulp_flow_db_deinit(ulp_ctxt);
+	return -ENOMEM;
+}
+
+/*
+ * Deinitialize the flow database. Memory is deallocated in
+ * this call and all flows should have been purged before this
+ * call.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success.
+ */
+int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
+{
+	struct bnxt_ulp_flow_db			*flow_db;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Detach the flow database from the ulp context. */
+	bnxt_ulp_cntxt_ptr2_flow_db_set(ulp_ctxt, NULL);
+
+	/* Free up all the memory. */
+	ulp_flow_db_dealloc_resource(flow_db, BNXT_ULP_REGULAR_FLOW_TABLE);
+	ulp_flow_db_dealloc_resource(flow_db, BNXT_ULP_DEFAULT_FLOW_TABLE);
+	rte_free(flow_db);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
new file mode 100644
index 0000000..a2ee8fa
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_FLOW_DB_H_
+#define _ULP_FLOW_DB_H_
+
+#include "bnxt_ulp.h"
+#include "ulp_template_db.h"
+
+#define BNXT_FLOW_DB_DEFAULT_NUM_FLOWS		128
+#define BNXT_FLOW_DB_DEFAULT_NUM_RESOURCES	5
+
+/* Structure for the flow database resource information. */
+struct ulp_fdb_resource_info {
+	/* Points to next resource in the chained list. */
+	uint32_t	nxt_resource_idx;
+	union {
+		uint64_t	resource_em_handle;
+		struct {
+			uint32_t	resource_type;
+			uint32_t	resource_hndl;
+		};
+	};
+};
+
+/* Structure for the flow database flow table. */
+struct bnxt_ulp_flow_tbl {
+	/* Flow tbl is the resource object list for each flow id. */
+	struct ulp_fdb_resource_info	*flow_resources;
+
+	/* Flow table stack to track free list of resources. */
+	uint32_t	*flow_tbl_stack;
+	uint32_t	head_index;
+	uint32_t	tail_index;
+
+	/* Table to track the active flows. */
+	uint64_t	*active_flow_tbl;
+	uint32_t	num_flows;
+	uint32_t	num_resources;
+};
+
+/* Flow database supports two tables. */
+enum bnxt_ulp_flow_db_tables {
+	BNXT_ULP_REGULAR_FLOW_TABLE,
+	BNXT_ULP_DEFAULT_FLOW_TABLE,
+	BNXT_ULP_FLOW_TABLE_MAX
+};
+
+/* Structure for the flow database that holds the flow tables. */
+struct bnxt_ulp_flow_db {
+	struct bnxt_ulp_flow_tbl	flow_tbl[BNXT_ULP_FLOW_TABLE_MAX];
+};
+
+/*
+ * Initialize the flow database. Memory is allocated in this
+ * call and assigned to the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt);
+
+/*
+ * Deinitialize the flow database. Memory is deallocated in
+ * this call and all flows should have been purged before this
+ * call.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success.
+ */
+int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
+
+#endif /* _ULP_FLOW_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
new file mode 100644
index 0000000..3f28a73
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include "bnxt_ulp.h"
+#include "tf_ext_flow_handle.h"
+#include "ulp_mark_mgr.h"
+#include "bnxt_tf_common.h"
+#include "../bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+
+/*
+ * Allocate and Initialize all Mark Manager resources for this ulp context.
+ *
+ * ctxt [in] The ulp context for the mark manager.
+ *
+ */
+int32_t
+ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_device_params *dparms;
+	struct bnxt_ulp_mark_tbl *mark_tbl = NULL;
+	uint32_t dev_id;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(DEBUG, "Invalid ULP CTXT\n");
+		return -EINVAL;
+	}
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
+		return -EINVAL;
+	}
+
+	mark_tbl = rte_zmalloc("ulp_rx_mark_tbl_ptr",
+			       sizeof(struct bnxt_ulp_mark_tbl), 0);
+	if (!mark_tbl)
+		goto mem_error;
+
+	/* Allocate the LFID table based on the device's LFID entry count. */
+	mark_tbl->lfid_tbl = rte_zmalloc("ulp_rx_em_flow_mark_table",
+					 dparms->lfid_entries *
+					    sizeof(struct bnxt_lfid_mark_info),
+					 0);
+
+	if (!mark_tbl->lfid_tbl)
+		goto mem_error;
+
+	/* Need to allocate 2 * Num flows to account for hash type bit. */
+	mark_tbl->gfid_tbl = rte_zmalloc("ulp_rx_eem_flow_mark_table",
+					 2 * dparms->num_flows *
+					    sizeof(struct bnxt_gfid_mark_info),
+					 0);
+	if (!mark_tbl->gfid_tbl)
+		goto mem_error;
+
+	/*
+	 * TBD: This needs to be generalized for better mark handling
+	 * These values are used to compress the FID to the allowable index
+	 * space.  The FID from hw may be the full hash.
+	 */
+	mark_tbl->gfid_max	= dparms->gfid_entries - 1;
+	mark_tbl->gfid_mask	= (dparms->gfid_entries / 2) - 1;
+	mark_tbl->gfid_type_bit = (dparms->gfid_entries / 2);
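+	/*
+	 * e.g. with 64K GFID entries: gfid_max = 0xFFFF, gfid_mask =
+	 * 0x7FFF, gfid_type_bit = 0x8000; the hash type bit selects the
+	 * upper or lower half of the table.
+	 */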
+
+	BNXT_TF_DBG(DEBUG, "GFID Max = 0x%08x\nGFID MASK = 0x%08x\n",
+		    mark_tbl->gfid_max,
+		    mark_tbl->gfid_mask);
+
+	/* Add the mark tbl to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
+
+	return 0;
+
+mem_error:
+	rte_free(mark_tbl->gfid_tbl);
+	rte_free(mark_tbl->lfid_tbl);
+	rte_free(mark_tbl);
+	BNXT_TF_DBG(DEBUG,
+		    "Failed to allocate memory for mark mgr\n");
+
+	return -ENOMEM;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
new file mode 100644
index 0000000..b175abd
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_MARK_MGR_H_
+#define _ULP_MARK_MGR_H_
+
+#include "bnxt_ulp.h"
+
+#define ULP_MARK_INVALID (0)
+struct bnxt_lfid_mark_info {
+	uint16_t	mark_id;
+	bool		valid;
+};
+
+struct bnxt_gfid_mark_info {
+	uint32_t	mark_id;
+	bool		valid;
+};
+
+struct bnxt_ulp_mark_tbl {
+	struct bnxt_lfid_mark_info	*lfid_tbl;
+	struct bnxt_gfid_mark_info	*gfid_tbl;
+	uint32_t			gfid_mask;
+	uint32_t			gfid_type_bit;
+	uint32_t			gfid_max;
+};
+
+/*
+ * Allocate and Initialize all Mark Manager resources for this ulp context.
+ *
+ * Initialize MARK database for GFID & LFID tables
+ * GFID: Global flow id which is based on EEM hash id.
+ * LFID: Local flow id which is the CFA action pointer.
+ * GFID is used for EEM flows, LFID is used for EM flows.
+ *
+ * Flow mapper modules adds mark_id in the MARK database.
+ *
+ * The BNXT PMD receive handler extracts the hardware flow id from the
+ * received completion record, fetches the mark_id from the MARK
+ * database using the flow id, and injects the mark_id into the
+ * packet's mbuf.
+ *
+ * ctxt [in] The ulp context for the mark manager.
+ */
+int32_t
+ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
+
+#endif /* _ULP_MARK_MGR_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
new file mode 100644
index 0000000..bc0ffd3
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+/*
+ * date: Mon Mar  9 02:37:53 2020
+ * version: 0.0
+ */
+
+#include "ulp_template_db.h"
+#include "ulp_template_field_db.h"
+#include "ulp_template_struct.h"
+
+struct bnxt_ulp_device_params ulp_device_params[] = {
+	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
+		.global_fid_enable       = BNXT_ULP_SYM_YES,
+		.byte_order              = BNXT_ULP_SYM_LITTLE_ENDIAN,
+		.encap_byte_swap         = 1,
+		.lfid_entries            = 16384,
+		.lfid_entry_size         = 4,
+		.gfid_entries            = 65536,
+		.gfid_entry_size         = 4,
+		.num_flows               = 32768,
+		.num_resources_per_flow  = 8
+	}
+};
+
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
new file mode 100644
index 0000000..ba2a101
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+/*
+ * date: Mon Mar  9 02:37:53 2020
+ * version: 0.0
+ */
+
+#ifndef ULP_TEMPLATE_DB_H_
+#define ULP_TEMPLATE_DB_H_
+
+#define BNXT_ULP_MAX_NUM_DEVICES 4
+
+enum bnxt_ulp_byte_order {
+	BNXT_ULP_BYTE_ORDER_BE,
+	BNXT_ULP_BYTE_ORDER_LE,
+	BNXT_ULP_BYTE_ORDER_LAST
+};
+
+enum bnxt_ulp_device_id {
+	BNXT_ULP_DEVICE_ID_WH_PLUS,
+	BNXT_ULP_DEVICE_ID_THOR,
+	BNXT_ULP_DEVICE_ID_STINGRAY,
+	BNXT_ULP_DEVICE_ID_STINGRAY2,
+	BNXT_ULP_DEVICE_ID_LAST
+};
+
+enum bnxt_ulp_sym {
+	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
+	BNXT_ULP_SYM_YES = 1
+};
+
+#endif /* ULP_TEMPLATE_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
new file mode 100644
index 0000000..4b9d0b2
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_TEMPLATE_STRUCT_H_
+#define _ULP_TEMPLATE_STRUCT_H_
+
+#include <stdint.h>
+#include "rte_ether.h"
+#include "rte_icmp.h"
+#include "rte_ip.h"
+#include "rte_tcp.h"
+#include "rte_udp.h"
+#include "rte_esp.h"
+#include "rte_sctp.h"
+#include "rte_flow.h"
+#include "tf_core.h"
+
+/* Device specific parameters. */
+struct bnxt_ulp_device_params {
+	uint8_t				description[16];
+	uint32_t			global_fid_enable;
+	enum bnxt_ulp_byte_order	byte_order;
+	uint8_t				encap_byte_swap;
+	uint32_t			lfid_entries;
+	uint32_t			lfid_entry_size;
+	uint64_t			gfid_entries;
+	uint32_t			gfid_entry_size;
+	uint64_t			num_flows;
+	uint32_t			num_resources_per_flow;
+};
+
+/*
+ * The ulp_device_params is indexed by the dev_id.
+ * This table maintains the device specific parameters.
+ */
+extern struct bnxt_ulp_device_params ulp_device_params[];
+
+#endif /* _ULP_TEMPLATE_STRUCT_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 17/34] net/bnxt: add support for ULP session manager cleanup
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (15 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 16/34] net/bnxt: add support for ULP session manager init Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 18/34] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
                     ` (18 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Mike Baucom

A ULP session will contain all the resources needed to support
rte flow offloads. A session is initialized as part of rte_eth_device
start. A DPDK application can have multiple interfaces which
means rte_eth_device start will be called for each of these devices.
The ULP session manager makes sure that a ULP session is initialized
only once. Apart from this, it also initializes the MARK database,
EEM table & flow database. The ULP session manager also manages a list of
all opened ULP sessions.

This patch adds support for cleaning up resources initialized for ULP
sessions.
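
For reference, a minimal sketch of the teardown sequence that
bnxt_ulp_deinit() in this patch performs internally. The function
names are taken from the diff below; 'session' is the state found via
ulp_get_session(), and the mutex handling and error checks of the real
code are omitted:

    static void example_teardown(struct bnxt *bp,
                                 struct bnxt_ulp_session_state *session)
    {
            ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx); /* EEM host memory */
            ulp_flow_db_deinit(&bp->ulp_ctx);           /* flow database */
            ulp_mark_db_deinit(&bp->ulp_ctx);           /* mark database */
            ulp_ctx_detach(bp, session);                /* ctx + tf session */
            ulp_session_deinit(session);                /* session list entry */
    }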

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c         |   3 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c     | 167 ++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h     |  10 ++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  25 +++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |   8 ++
 5 files changed, 212 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 1703ce3..2f08921 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -951,6 +951,9 @@ static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
+	if (bp->truflow)
+		bnxt_ulp_deinit(bp);
+
 	eth_dev->data->dev_started = 0;
 	/* Prevent crashes when queues are still in use */
 	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 7afc6bf..3795c6d 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -28,6 +28,27 @@ STAILQ_HEAD(, bnxt_ulp_session_state) bnxt_ulp_session_list =
 static pthread_mutex_t bnxt_ulp_global_mutex = PTHREAD_MUTEX_INITIALIZER;
 
 /*
+ * Allow the deletion of context only for the bnxt device that
+ * created the session
+ * TBD - The implementation of the function should change to
+ * using the reference count once tf_session_attach functionality
+ * is fixed.
+ */
+bool
+ulp_ctx_deinit_allowed(void *ptr)
+{
+	struct bnxt *bp = (struct bnxt *)ptr;
+
+	if (!bp)
+		return 0;
+
+	if (&bp->tfp == bp->ulp_ctx.g_tfp)
+		return 1;
+
+	return 0;
+}
+
+/*
  * Initialize an ULP session.
  * An ULP session will contain all the resources needed to support rte flow
  * offloads. A session is initialized as part of rte_eth_device start.
@@ -67,6 +88,22 @@ ulp_ctx_session_open(struct bnxt *bp,
 	return rc;
 }
 
+/*
+ * Close the ULP session.
+ * It takes the bnxt device and the session state as arguments.
+ */
+static void
+ulp_ctx_session_close(struct bnxt *bp,
+		      struct bnxt_ulp_session_state *session)
+{
+	/* close the session in the hardware */
+	if (session->session_opened)
+		tf_close_session(&bp->tfp);
+	session->session_opened = 0;
+	session->g_tfp = NULL;
+	bp->ulp_ctx.g_tfp = NULL;
+}
+
 static void
 bnxt_init_tbl_scope_parms(struct bnxt *bp,
 			  struct tf_alloc_tbl_scope_parms *params)
@@ -138,6 +175,41 @@ ulp_eem_tbl_scope_init(struct bnxt *bp)
 	return 0;
 }
 
+/* Free Extended Exact Match host memory */
+static int32_t
+ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
+{
+	struct tf_free_tbl_scope_parms	params = {0};
+	struct tf			*tfp;
+	int32_t				rc = 0;
+
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return -EINVAL;
+
+	/* Free the resources for the last device */
+	if (!ulp_ctx_deinit_allowed(bp))
+		return rc;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get the truflow pointer\n");
+		return -EINVAL;
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &params.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get the table scope id\n");
+		return -EINVAL;
+	}
+
+	rc = tf_free_tbl_scope(tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to free table scope\n");
+		return -EINVAL;
+	}
+	return rc;
+}
+
 /* The function to free and deinit the ulp context data. */
 static int32_t
 ulp_ctx_deinit(struct bnxt *bp,
@@ -148,6 +220,9 @@ ulp_ctx_deinit(struct bnxt *bp,
 		return -EINVAL;
 	}
 
+	/* close the tf session */
+	ulp_ctx_session_close(bp, session);
+
 	/* Free the contents */
 	if (session->cfg_data) {
 		rte_free(session->cfg_data);
@@ -211,6 +286,36 @@ ulp_ctx_attach(struct bnxt_ulp_context *ulp_ctx,
 	return 0;
 }
 
+static int32_t
+ulp_ctx_detach(struct bnxt *bp,
+	       struct bnxt_ulp_session_state *session)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+
+	if (!bp || !session) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	ulp_ctx = &bp->ulp_ctx;
+
+	if (!ulp_ctx->cfg_data)
+		return 0;
+
+	/* TBD call TF_session_detach */
+
+	/* Decrement the ulp context data reference count. */
+	if (ulp_ctx->cfg_data->ref_cnt >= 1) {
+		ulp_ctx->cfg_data->ref_cnt--;
+		if (ulp_ctx_deinit_allowed(bp))
+			ulp_ctx_deinit(bp, session);
+		ulp_ctx->cfg_data = NULL;
+		ulp_ctx->g_tfp = NULL;
+		return 0;
+	}
+	BNXT_TF_DBG(ERR, "context detach on invalid data\n");
+	return 0;
+}
+
 /*
  * Initialize the state of an ULP session.
  * If the state of an ULP session is not initialized, set it's state to
@@ -297,6 +402,26 @@ ulp_session_init(struct bnxt *bp,
 }
 
 /*
+ * When a device is closed, remove its associated session from the global
+ * session list.
+ */
+static void
+ulp_session_deinit(struct bnxt_ulp_session_state *session)
+{
+	if (!session)
+		return;
+
+	if (!session->cfg_data) {
+		pthread_mutex_lock(&bnxt_ulp_global_mutex);
+		STAILQ_REMOVE(&bnxt_ulp_session_list, session,
+			      bnxt_ulp_session_state, next);
+		pthread_mutex_destroy(&session->bnxt_ulp_mutex);
+		rte_free(session);
+		pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+	}
+}
+
+/*
  * When a port is initialized by dpdk, this function is called
  * and it initializes the ULP context and the rest of the
  * infrastructure associated with it.
@@ -363,12 +488,52 @@ bnxt_ulp_init(struct bnxt *bp)
 	return rc;
 
 jump_to_error:
+	bnxt_ulp_deinit(bp);
 	return -ENOMEM;
 }
 
 /* Below are the access functions to access internal data of ulp context. */
 
-/* Function to set the Mark DB into the context. */
+/*
+ * When a port is de-initialized by dpdk, this function is called
+ * and it clears the ULP context and the rest of the
+ * infrastructure associated with it.
+ */
+void
+bnxt_ulp_deinit(struct bnxt *bp)
+{
+	struct bnxt_ulp_session_state	*session;
+	struct rte_pci_device		*pci_dev;
+	struct rte_pci_addr		*pci_addr;
+
+	/* Get the session first */
+	pci_dev = RTE_DEV_TO_PCI(bp->eth_dev->device);
+	pci_addr = &pci_dev->addr;
+	pthread_mutex_lock(&bnxt_ulp_global_mutex);
+	session = ulp_get_session(pci_addr);
+	pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+
+	/* session not found, just exit */
+	if (!session)
+		return;
+
+	/* cleanup the eem table scope */
+	ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx);
+
+	/* cleanup the flow database */
+	ulp_flow_db_deinit(&bp->ulp_ctx);
+
+	/* Delete the Mark database */
+	ulp_mark_db_deinit(&bp->ulp_ctx);
+
+	/* Delete the ulp context and tf session */
+	ulp_ctx_detach(bp, session);
+
+	/* Finally delete the bnxt session */
+	ulp_session_deinit(session);
+}
+
+/* Function to set the Mark DB into the context */
 int32_t
 bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
 				struct bnxt_ulp_mark_tbl *mark_tbl)
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index d88225f..b3e9e96 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -47,6 +47,16 @@ struct rte_tf_flow {
 	uint32_t	flow_id;
 };
 
+/*
+ * Allow the deletion of context only for the bnxt device that
+ * created the session
+ * TBD - The implementation of the function should change to
+ * using the reference count once tf_session_attach functionality
+ * is fixed.
+ */
+bool
+ulp_ctx_deinit_allowed(void *bp);
+
 /* Function to set the device id of the hardware. */
 int32_t
 bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx, uint32_t dev_id);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 3f28a73..9e4307e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -92,3 +92,28 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 
 	return -ENOMEM;
 }
+
+/*
+ * Release all resources in the Mark Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ */
+int32_t
+ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_mark_tbl *mtbl;
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+
+	if (mtbl) {
+		rte_free(mtbl->gfid_tbl);
+		rte_free(mtbl->lfid_tbl);
+		rte_free(mtbl);
+
+		/* Safe to ignore on deinit */
+		(void)bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, NULL);
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index b175abd..5948683 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -46,4 +46,12 @@ struct bnxt_ulp_mark_tbl {
 int32_t
 ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Release all resources in the Mark Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the mark manager
+ */
+int32_t
+ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
+
 #endif /* _ULP_MARK_MGR_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 18/34] net/bnxt: add helper functions for blob/regfile ops
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (16 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 17/34] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 19/34] net/bnxt: add support to process action tables Venkat Duvvuru
                     ` (17 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

1. blob routines for managing key/mask/result data
2. regfile routines for managing temporal data during flow
   construction
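
As a usage sketch (illustrative only, built on the APIs added below):
pack two fields into a big-endian key blob and retrieve the packed
result. The field values are arbitrary examples.

    #include "ulp_utils.h"

    static int example_build_key(void)
    {
            struct ulp_blob blob;
            uint8_t mac[6] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55};
            uint8_t vlan[2] = {0x0f, 0xff};  /* 12-bit VLAN id 0xfff */
            uint16_t bitlen;
            uint8_t *key;

            if (!ulp_blob_init(&blob, BNXT_ULP_FLMP_BLOB_SIZE_IN_BITS,
                               BNXT_ULP_BYTE_ORDER_BE))
                    return -1;
            /* Push a 48-bit MAC followed by a 12-bit VLAN id. */
            if (!ulp_blob_push(&blob, mac, 48) ||
                !ulp_blob_push(&blob, vlan, 12))
                    return -1;
            key = ulp_blob_data_get(&blob, &bitlen);  /* bitlen == 60 */
            return key ? 0 : -1;
    }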

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                 |   2 +
 drivers/net/bnxt/tf_ulp/ulp_template_db.h |  12 +
 drivers/net/bnxt/tf_ulp/ulp_utils.c       | 521 ++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_utils.h       | 279 ++++++++++++++++
 4 files changed, 814 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index bb9b888..4e0dea1 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -61,6 +61,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
+
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index ba2a101..1eed828 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -27,6 +27,18 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST
 };
 
+enum bnxt_ulp_fmf_mask {
+	BNXT_ULP_FMF_MASK_IGNORE,
+	BNXT_ULP_FMF_MASK_ANY,
+	BNXT_ULP_FMF_MASK_EXACT,
+	BNXT_ULP_FMF_MASK_WILDCARD,
+	BNXT_ULP_FMF_MASK_LAST
+};
+
+enum bnxt_ulp_regfile_index {
+	BNXT_ULP_REGFILE_INDEX_LAST
+};
+
 enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_YES = 1
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.c b/drivers/net/bnxt/tf_ulp/ulp_utils.c
new file mode 100644
index 0000000..0d38cf3
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.c
@@ -0,0 +1,521 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#include "ulp_utils.h"
+#include "bnxt_tf_common.h"
+
+/*
+ * Initialize the regfile structure for writing
+ *
+ * regfile [in] Ptr to a regfile instance
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_regfile_init(struct ulp_regfile *regfile)
+{
+	/* validate the arguments */
+	if (!regfile) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+	memset(regfile, 0, sizeof(struct ulp_regfile));
+	return 1; /* Success */
+}
+
+/*
+ * Read a value from the regfile
+ *
+ * regfile [in] The regfile instance. Must be initialized prior to being used
+ *
+ * field [in] The field to be read within the regfile.
+ *
+ * data [out] Ptr to the location where the read value is copied.
+ *
+ * returns size, zero on failure
+ */
+uint32_t
+ulp_regfile_read(struct ulp_regfile *regfile,
+		 enum bnxt_ulp_regfile_index field,
+		 uint64_t *data)
+{
+	/* validate the arguments */
+	if (!regfile || field >= BNXT_ULP_REGFILE_INDEX_LAST) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+
+	*data = regfile->entry[field].data;
+	return sizeof(*data);
+}
+
+/*
+ * Write a value to the regfile
+ *
+ * regfile [in] The regfile instance.  Must be initialized prior to being used
+ *
+ * field [in] The field to be written within the regfile.
+ *
+ * data [in] The value to be written into the regfile entry. It is
+ * stored in the same byte order as it is passed in.
+ *
+ * returns the number of bytes written, zero on failure
+ */
+uint32_t
+ulp_regfile_write(struct ulp_regfile *regfile,
+		  enum bnxt_ulp_regfile_index field,
+		  uint64_t data)
+{
+	/* validate the arguments */
+	if (!regfile || field >= BNXT_ULP_REGFILE_INDEX_LAST || field < 0) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+
+	regfile->entry[field].data = data;
+	return sizeof(data); /* Success */
+}
+
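+/*
+ * Write the low 'bitlen' (1-8) bits of 'val' into byte stream 'bs' at
+ * bit position 'bitpos', most-significant bit first. The write may
+ * straddle a byte boundary.
+ */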
+static void
+ulp_bs_put_msb(uint8_t *bs, uint16_t bitpos, uint8_t bitlen, uint8_t val)
+{
+	uint8_t bitoffs = bitpos % 8;
+	uint16_t index  = bitpos / 8;
+	uint8_t mask;
+	uint8_t tmp;
+	int8_t shift;
+
+	tmp = bs[index];
+	mask = ((uint8_t)-1 >> (8 - bitlen));
+	shift = 8 - bitoffs - bitlen;
+	val &= mask;
+
+	if (shift >= 0) {
+		tmp &= ~(mask << shift);
+		tmp |= val << shift;
+		bs[index] = tmp;
+	} else {
+		tmp &= ~((uint8_t)-1 >> bitoffs);
+		tmp |= val >> -shift;
+		bs[index++] = tmp;
+
+		tmp = bs[index];
+		tmp &= ((uint8_t)-1 >> (bitlen - (8 - bitoffs)));
+		tmp |= val << (8 + shift);
+		bs[index] = tmp;
+	}
+}
+
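+/*
+ * Write the low 'bitlen' (1-8) bits of 'val' into byte stream 'bs' at
+ * bit position 'bitpos', least-significant bit first. The write may
+ * straddle a byte boundary.
+ */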
+static void
+ulp_bs_put_lsb(uint8_t *bs, uint16_t bitpos, uint8_t bitlen, uint8_t val)
+{
+	uint8_t bitoffs = bitpos % 8;
+	uint16_t index  = bitpos / 8;
+	uint8_t mask;
+	uint8_t tmp;
+	uint8_t shift;
+	uint8_t partial;
+
+	tmp = bs[index];
+	shift = bitoffs;
+
+	if (bitoffs + bitlen <= 8) {
+		mask = ((1 << bitlen) - 1) << shift;
+		tmp &= ~mask;
+		tmp |= ((val << shift) & mask);
+		bs[index] = tmp;
+	} else {
+		partial = 8 - bitoffs;
+		mask = ((1 << partial) - 1) << shift;
+		tmp &= ~mask;
+		tmp |= ((val << shift) & mask);
+		bs[index++] = tmp;
+
+		val >>= partial;
+		partial = bitlen - partial;
+		mask = ((1 << partial) - 1);
+		tmp = bs[index];
+		tmp &= ~mask;
+		tmp |= (val & mask);
+		bs[index] = tmp;
+	}
+}
+
+/* Assuming that val is in Big-Endian Format */
+static uint32_t
+ulp_bs_push_lsb(uint8_t *bs, uint16_t pos, uint8_t len, uint8_t *val)
+{
+	int i;
+	int cnt = (len) / 8;
+	int tlen = len;
+
+	if (cnt > 0 && !(len % 8))
+		cnt -= 1;
+
+	for (i = 0; i < cnt; i++) {
+		ulp_bs_put_lsb(bs, pos, 8, val[cnt - i]);
+		pos += 8;
+		tlen -= 8;
+	}
+
+	/* Handle the remainder bits */
+	if (tlen)
+		ulp_bs_put_lsb(bs, pos, tlen, val[0]);
+	return len;
+}
+
+/* Assuming that val is in Big-Endian Format */
+static uint32_t
+ulp_bs_push_msb(uint8_t *bs, uint16_t pos, uint8_t len, uint8_t *val)
+{
+	int i;
+	int cnt = (len + 7) / 8;
+	int tlen = len;
+
+	/* Handle any remainder bits */
+	int tmp = len % 8;
+
+	if (!tmp)
+		tmp = 8;
+
+	ulp_bs_put_msb(bs, pos, tmp, val[0]);
+
+	pos += tmp;
+	tlen -= tmp;
+
+	for (i = 1; i < cnt; i++) {
+		ulp_bs_put_msb(bs, pos, 8, val[i]);
+		pos += 8;
+		tlen -= 8;
+	}
+
+	return len;
+}
+
+/*
+ * Initializes the blob structure for creating binary blob
+ *
+ * blob [in] The blob to be initialized
+ *
+ * bitlen [in] The bit length of the blob
+ *
+ * order [in] The byte order for the blob.  Currently only supporting
+ * big endian.  All fields are packed with this order.
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_blob_init(struct ulp_blob *blob,
+	      uint16_t bitlen,
+	      enum bnxt_ulp_byte_order order)
+{
+	/* validate the arguments */
+	if (!blob || bitlen > (8 * sizeof(blob->data))) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+	blob->bitlen = bitlen;
+	blob->byte_order = order;
+	blob->write_idx = 0;
+	memset(blob->data, 0, sizeof(blob->data));
+	return 1; /* Success */
+}
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] A pointer to bytes to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * NULL returned on error.
+ */
+#define ULP_BLOB_BYTE		8
+#define ULP_BLOB_BYTE_HEX	0xFF
+#define BLOB_MASK_CAL(x)	((0xFF << (x)) & 0xFF)
+uint32_t
+ulp_blob_push(struct ulp_blob *blob,
+	      uint8_t *data,
+	      uint32_t datalen)
+{
+	uint32_t rc;
+
+	/* validate the arguments */
+	if (!blob || datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+
+	if (blob->byte_order == BNXT_ULP_BYTE_ORDER_BE)
+		rc = ulp_bs_push_msb(blob->data,
+				     blob->write_idx,
+				     datalen,
+				     data);
+	else
+		rc = ulp_bs_push_lsb(blob->data,
+				     blob->write_idx,
+				     datalen,
+				     data);
+	if (!rc) {
+		BNXT_TF_DBG(ERR, "Failed to write blob\n");
+		return 0;
+	}
+	blob->write_idx += datalen;
+	return datalen;
+}
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] 64-bit value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * NULL returned on error, pointer pushed value otherwise.
+ */
+uint8_t *
+ulp_blob_push_64(struct ulp_blob *blob,
+		 uint64_t *data,
+		 uint32_t datalen)
+{
+	uint8_t *val = (uint8_t *)data;
+	int rc;
+
+	int size = (datalen + 7) / 8;
+
+	if (!blob || !data ||
+	    datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0;
+	}
+
+	rc = ulp_blob_push(blob, &val[8 - size], datalen);
+	if (!rc)
+		return 0;
+
+	return &val[8 - size];
+}
+
+/*
+ * Add encap data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * NULL returned on error, pointer pushed value otherwise.
+ */
+uint32_t
+ulp_blob_push_encap(struct ulp_blob *blob,
+		    uint8_t *data,
+		    uint32_t datalen)
+{
+	uint8_t		*val = (uint8_t *)data;
+	uint32_t	initial_size, write_size = datalen;
+	uint32_t	size = 0;
+
+	if (!blob || !data ||
+	    datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0;
+	}
+
+	initial_size = ULP_BYTE_2_BITS(sizeof(uint64_t)) -
+	    (blob->write_idx % ULP_BYTE_2_BITS(sizeof(uint64_t)));
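+	/*
+	 * Write in 64-bit chunks: first fill the remainder of the
+	 * current 64-bit word, then full 64-bit words, then any
+	 * remaining bits.
+	 */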
+	while (write_size > 0) {
+		if (initial_size && write_size > initial_size) {
+			size = initial_size;
+			initial_size = 0;
+		} else if (initial_size && write_size <= initial_size) {
+			size = write_size;
+			initial_size = 0;
+		} else if (write_size > ULP_BYTE_2_BITS(sizeof(uint64_t))) {
+			size = ULP_BYTE_2_BITS(sizeof(uint64_t));
+		} else {
+			size = write_size;
+		}
+		if (!ulp_blob_push(blob, val, size)) {
+			BNXT_TF_DBG(ERR, "push field failed\n");
+			return 0;
+		}
+		val += ULP_BITS_2_BYTE(size);
+		write_size -= size;
+	}
+	return datalen;
+}
+
+/*
+ * Adds pad to an initialized blob at the current offset
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * datalen [in] The number of bits of pad to add
+ *
+ * returns the number of pad bits added, zero on failure
+ */
+uint32_t
+ulp_blob_pad_push(struct ulp_blob *blob,
+		  uint32_t datalen)
+{
+	if (datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "Pad too large for blob\n");
+		return 0;
+	}
+
+	blob->write_idx += datalen;
+	return datalen;
+}
+
+/*
+ * Get the data portion of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * datalen [out] The number of bits that are filled.
+ *
+ * returns a byte array of the blob data.  Returns NULL on error.
+ */
+uint8_t *
+ulp_blob_data_get(struct ulp_blob *blob,
+		  uint16_t *datalen)
+{
+	/* validate the arguments */
+	if (!blob) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return NULL; /* failure */
+	}
+	*datalen = blob->write_idx;
+	return blob->data;
+}
+
+/*
+ * Set the encap swap start index of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * returns void.
+ */
+void
+ulp_blob_encap_swap_idx_set(struct ulp_blob *blob)
+{
+	/* validate the arguments */
+	if (!blob) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return; /* failure */
+	}
+	blob->encap_swap_idx = blob->write_idx;
+}
+
+/*
+ * Perform the encap buffer swap to 64 bit reversal.
+ *
+ * blob [in] The blob's data to be used for swap.
+ *
+ * returns void.
+ */
+void
+ulp_blob_perform_encap_swap(struct ulp_blob *blob)
+{
+	uint32_t		i, idx = 0, end_idx = 0;
+	uint8_t		temp_val_1, temp_val_2;
+
+	/* validate the arguments */
+	if (!blob) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return; /* failure */
+	}
+	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx + 1);
+	end_idx = ULP_BITS_2_BYTE(blob->write_idx);
+
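+	/* Reverse each 8-byte chunk in place, two bytes at a time. */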
+	while (idx <= end_idx) {
+		for (i = 0; i < 4; i = i + 2) {
+			temp_val_1 = blob->data[idx + i];
+			temp_val_2 = blob->data[idx + i + 1];
+			blob->data[idx + i] = blob->data[idx + 6 - i];
+			blob->data[idx + i + 1] = blob->data[idx + 7 - i];
+			blob->data[idx + 7 - i] = temp_val_2;
+			blob->data[idx + 6 - i] = temp_val_1;
+		}
+		idx += 8;
+	}
+}
+
+/*
+ * Read data from the operand
+ *
+ * operand [in] A pointer to a 16 Byte operand
+ *
+ * val [in/out] The variable to copy the operand to
+ *
+ * bytes [in] The number of bytes to read into val
+ *
+ * returns the number of bytes read, zero on error
+ */
+uint16_t
+ulp_operand_read(uint8_t *operand,
+		 uint8_t *val,
+		 uint16_t bytes)
+{
+	/* validate the arguments */
+	if (!operand || !val) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+	memcpy(val, operand, bytes);
+	return bytes;
+}
+
+/*
+ * Copy the buffer into the encap format, 2 bytes at a time.
+ * The MSB of the src is placed at the LSB of dst.
+ *
+ * dst [out] The destination buffer
+ * src [in] The source buffer
+ * size [in] The size of the buffer in bytes.
+ */
+void
+ulp_encap_buffer_copy(uint8_t *dst,
+		      const uint8_t *src,
+		      uint16_t size)
+{
+	uint16_t	idx = 0;
+
+	/* copy 2 bytes at a time. Write MSB to LSB */
+	while ((idx + sizeof(uint16_t)) <= size) {
+		memcpy(&dst[idx], &src[size - idx - sizeof(uint16_t)],
+		       sizeof(uint16_t));
+		idx += sizeof(uint16_t);
+	}
+}
+
+/*
+ * Check if the buffer is empty
+ *
+ * buf [in] The buffer
+ * size [in] The size of the buffer
+ *
+ * returns 1 if the buffer is empty, zero otherwise.
+ */
+int32_t ulp_buffer_is_empty(const uint8_t *buf, uint32_t size)
+{
+	return buf[0] == 0 && !memcmp(buf, buf + 1, size - 1);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.h b/drivers/net/bnxt/tf_ulp/ulp_utils.h
new file mode 100644
index 0000000..db88546
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_UTILS_H_
+#define _ULP_UTILS_H_
+
+#include "bnxt.h"
+#include "ulp_template_db.h"
+
+/*
+ * Macros for bitmap sets and gets
+ * These macros can be used if the val are power of 2.
+ */
+#define ULP_BITMAP_SET(bitmap, val)	((bitmap) |= (val))
+#define ULP_BITMAP_RESET(bitmap, val)	((bitmap) &= ~(val))
+#define ULP_BITMAP_ISSET(bitmap, val)	((bitmap) & (val))
+#define ULP_BITSET_CMP(b1, b2)  memcmp(&(b1)->bits, \
+				&(b2)->bits, sizeof((b1)->bits))
+/*
+ * Macros for bitmap sets and gets
+ * These macros can be used if the val are not power of 2 and
+ * are simple index values.
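+ * Index i maps to bit (63 - (i % 64)) of the 64-bit word; bit 0 is
+ * the MSB.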
+ */
+#define ULP_INDEX_BITMAP_SIZE	(sizeof(uint64_t) * 8)
+#define ULP_INDEX_BITMAP_CSET(i)	(1UL << \
+			((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE)))
+
+#define ULP_INDEX_BITMAP_SET(b, i)	((b) |= \
+			(1UL << ((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE))))
+
+#define ULP_INDEX_BITMAP_RESET(b, i)	((b) &= \
+			(~(1UL << ((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE)))))
+
+#define ULP_INDEX_BITMAP_GET(b, i)		(((b) >> \
+			((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE))) & 1)
+
+#define ULP_DEVICE_PARAMS_INDEX(tid, dev_id)	\
+	(((tid) << BNXT_ULP_LOG2_MAX_NUM_DEV) | (dev_id))
+
+/* Macro to convert bytes to bits */
+#define ULP_BYTE_2_BITS(byte_x)		((byte_x) * 8)
+/* Macro to convert bits to bytes */
+#define ULP_BITS_2_BYTE(bits_x)		(((bits_x) + 7) / 8)
+/* Macro to convert bits to bytes with no round off*/
+#define ULP_BITS_2_BYTE_NR(bits_x)	((bits_x) / 8)
+
+/*
+ * Making the blob statically sized to 128 bytes for now.
+ * The blob must be initialized with ulp_blob_init prior to using.
+ */
+#define BNXT_ULP_FLMP_BLOB_SIZE	(128)
+#define BNXT_ULP_FLMP_BLOB_SIZE_IN_BITS	ULP_BYTE_2_BITS(BNXT_ULP_FLMP_BLOB_SIZE)
+struct ulp_blob {
+	enum bnxt_ulp_byte_order	byte_order;
+	uint16_t			write_idx;
+	uint16_t			bitlen;
+	uint8_t				data[BNXT_ULP_FLMP_BLOB_SIZE];
+	uint16_t			encap_swap_idx;
+};
+
+/*
+ * The data can likely be only 32 bits for now.  Just size check
+ * the data when being written.
+ */
+#define ULP_REGFILE_ENTRY_SIZE	(sizeof(uint32_t))
+struct ulp_regfile_entry {
+	uint64_t	data;
+	uint32_t	size;
+};
+
+struct ulp_regfile {
+	struct ulp_regfile_entry entry[BNXT_ULP_REGFILE_INDEX_LAST];
+};
+
+/*
+ * Initialize the regfile structure for writing
+ *
+ * regfile [in] Ptr to a regfile instance
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_regfile_init(struct ulp_regfile *regfile);
+
+/*
+ * Read a value from the regfile
+ *
+ * regfile [in] The regfile instance.  Must be initialized prior to being used
+ *
+ * field [in] The field to be read within the regfile.
+ *
+ * data [out] Ptr to the location where the read value is copied.
+ *
+ * returns the size of the data read, zero on failure
+ */
+uint32_t
+ulp_regfile_read(struct ulp_regfile *regfile,
+		 enum bnxt_ulp_regfile_index field,
+		 uint64_t *data);
+
+/*
+ * Write a value to the regfile
+ *
+ * regfile [in] The regfile instance.  Must be initialized prior to being used
+ *
+ * field [in] The field to be written within the regfile.
+ *
+ * data [in] The value to be written into the regfile entry. It is
+ * stored in the same byte order as it is passed in.
+ *
+ * returns zero on error
+ */
+uint32_t
+ulp_regfile_write(struct ulp_regfile *regfile,
+		  enum bnxt_ulp_regfile_index field,
+		  uint64_t data);
+
+/*
+ * Initializes the blob structure for creating binary blob
+ *
+ * blob [in] The blob to be initialized
+ *
+ * bitlen [in] The bit length of the blob
+ *
+ * order [in] The byte order for the blob.  Currently only supporting
+ * big endian.  All fields are packed with this order.
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_blob_init(struct ulp_blob *blob,
+	      uint16_t bitlen,
+	      enum bnxt_ulp_byte_order order);
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] A pointer to bytes to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * NULL returned on error.
+ */
+uint32_t
+ulp_blob_push(struct ulp_blob *blob,
+	      uint8_t *data,
+	      uint32_t datalen);
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] 64-bit value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * NULL returned on error, ptr to pushed data otherwise
+ */
+uint8_t *
+ulp_blob_push_64(struct ulp_blob *blob,
+		 uint64_t *data,
+		 uint32_t datalen);
+
+/*
+ * Add encap data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * Returns zero on error, datalen on success.
+ */
+uint32_t
+ulp_blob_push_encap(struct ulp_blob *blob,
+		    uint8_t *data,
+		    uint32_t datalen);
+
+/*
+ * Get the data portion of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * datalen [out] The number of bits that are filled.
+ *
+ * returns a byte array of the blob data.  Returns NULL on error.
+ */
+uint8_t *
+ulp_blob_data_get(struct ulp_blob *blob,
+		  uint16_t *datalen);
+
+/*
+ * Adds pad to an initialized blob at the current offset
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * datalen [in] The number of bits of pad to add
+ *
+ * returns the number of pad bits added, zero on failure
+ */
+uint32_t
+ulp_blob_pad_push(struct ulp_blob *blob,
+		  uint32_t datalen);
+
+/*
+ * Set the 64 bit swap start index of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * returns void.
+ */
+void
+ulp_blob_encap_swap_idx_set(struct ulp_blob *blob);
+
+/*
+ * Perform the encap buffer swap to 64 bit reversal.
+ *
+ * blob [in] The blob's data to be used for swap.
+ *
+ * returns void.
+ */
+void
+ulp_blob_perform_encap_swap(struct ulp_blob *blob);
+
+/*
+ * Read data from the operand
+ *
+ * operand [in] A pointer to a 16 Byte operand
+ *
+ * val [in/out] The variable to copy the operand to
+ *
+ * bytes [in] The number of bytes to read into val
+ *
+ * returns the number of bytes read, zero on error
+ */
+uint16_t
+ulp_operand_read(uint8_t *operand,
+		 uint8_t *val,
+		 uint16_t bytes);
+
+/*
+ * Copy the buffer into the encap format, 2 bytes at a time.
+ * The MSB of the src is placed at the LSB of dst.
+ *
+ * dst [out] The destination buffer
+ * src [in] The source buffer
+ * size [in] The size of the buffer in bytes.
+ */
+void
+ulp_encap_buffer_copy(uint8_t *dst,
+		      const uint8_t *src,
+		      uint16_t size);
+
+/*
+ * Check if the buffer is empty
+ *
+ * buf [in] The buffer
+ * size [in] The size of the buffer
+ *
+ * returns 1 if the buffer is empty, zero otherwise.
+ */
+int32_t ulp_buffer_is_empty(const uint8_t *buf, uint32_t size);
+
+#endif /* _ULP_UTILS_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 19/34] net/bnxt: add support to process action tables
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (17 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 18/34] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 20/34] net/bnxt: add support to process key tables Venkat Duvvuru
                     ` (16 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch adds processing of the action template. It iterates through
the list of action info templates and processes each one.
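
A minimal sketch of the regfile handoff this enables (illustrative
only; it assumes a regfile index added by a later patch in the series,
e.g. BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN, since with only this
patch applied the regfile index enum has no usable entries yet):

    #include "ulp_utils.h"
    #include "tfp.h"

    static int example_action_handoff(struct ulp_regfile *regfile,
                                      struct ulp_blob *result)
    {
            /* Allocated action record index, stored big-endian. */
            uint64_t idx = tfp_cpu_to_be_64(0x10);
            uint64_t val;

            /* The action table processor saves the allocated index. */
            if (!ulp_regfile_write(regfile,
                                   BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN,
                                   idx))
                    return -1;
            /* A later result field with opcode SET_TO_REGFILE reads it
             * back while packing the result blob.
             */
            if (!ulp_regfile_read(regfile,
                                  BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN,
                                  &val))
                    return -1;
            return ulp_blob_push_64(result, &val, 32) ? 0 : -1;
    }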

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   1 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 136 ++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |  25 ++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 364 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |  39 +++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 243 +++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     | 104 ++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  48 +++-
 8 files changed, 957 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 4e0dea1..f464d9e 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -62,6 +62,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mapper.c
 
 #
 # Export include files
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 3dd39c1..0e7b433 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -7,7 +7,68 @@
 #include "bnxt.h"
 #include "bnxt_tf_common.h"
 #include "ulp_flow_db.h"
+#include "ulp_utils.h"
 #include "ulp_template_struct.h"
+#include "ulp_mapper.h"
+
+#define ULP_FLOW_DB_RES_DIR_BIT		31
+#define ULP_FLOW_DB_RES_DIR_MASK	0x80000000
+#define ULP_FLOW_DB_RES_FUNC_BITS	28
+#define ULP_FLOW_DB_RES_FUNC_MASK	0x70000000
+#define ULP_FLOW_DB_RES_NXT_MASK	0x0FFFFFFF
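+
+/*
+ * nxt_resource_idx layout:
+ *   bit  31    - direction (RX/TX)
+ *   bits 30:28 - resource function
+ *   bits 27:0  - index of the next resource in the chain
+ */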
+
+/* Macro to copy the nxt_resource_idx */
+#define ULP_FLOW_DB_RES_NXT_SET(dst, src)	{(dst) |= ((src) &\
+					 ULP_FLOW_DB_RES_NXT_MASK); }
+#define ULP_FLOW_DB_RES_NXT_RESET(dst)	((dst) &= ~(ULP_FLOW_DB_RES_NXT_MASK))
+
+/*
+ * Helper function to check whether the active flow bit for a flow id
+ * is set. No validation is done in this function.
+ *
+ * flow_tbl [in] Ptr to flow table
+ * idx [in] The index of the bit to be checked.
+ *
+ * returns 1 on set or 0 if not set.
+ */
+static int32_t
+ulp_flow_db_active_flow_is_set(struct bnxt_ulp_flow_tbl	*flow_tbl,
+			       uint32_t			idx)
+{
+	uint32_t		active_index;
+
+	active_index = idx / ULP_INDEX_BITMAP_SIZE;
+	return ULP_INDEX_BITMAP_GET(flow_tbl->active_flow_tbl[active_index],
+				    idx);
+}
+
+/*
+ * Helper function to copy the resource params to resource info
+ * No validation is done in this function.
+ *
+ * resource_info [out] Ptr to resource information
+ * params [in] The input params from the caller
+ * returns none
+ */
+static void
+ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info   *resource_info,
+			       struct ulp_flow_db_res_params  *params)
+{
+	resource_info->nxt_resource_idx |= ((params->direction <<
+				      ULP_FLOW_DB_RES_DIR_BIT) &
+				     ULP_FLOW_DB_RES_DIR_MASK);
+	resource_info->nxt_resource_idx |= ((params->resource_func <<
+					     ULP_FLOW_DB_RES_FUNC_BITS) &
+					    ULP_FLOW_DB_RES_FUNC_MASK);
+
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+		resource_info->resource_hndl = (uint32_t)params->resource_hndl;
+		resource_info->resource_type = params->resource_type;
+
+	} else {
+		resource_info->resource_em_handle = params->resource_hndl;
+	}
+}
 
 /*
  * Helper function to allocate the flow table and initialize
@@ -185,3 +246,78 @@ int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 
 	return 0;
 }
+
+/*
+ * Allocate a resource and add it to the flow database entry.
+ * The params->critical_resource has to be set to 0 to allocate a new resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in] The contents to be copied into resource
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+	struct ulp_fdb_resource_info	*resource, *fid_resource;
+	uint32_t			idx;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (tbl_idx < 0 || tbl_idx >= BNXT_ULP_FLOW_TABLE_MAX) {
+		BNXT_TF_DBG(ERR, "Invalid table index\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* check for max flows */
+	if (fid >= flow_tbl->num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return -EINVAL;
+	}
+
+	/* check if the flow is active or not */
+	if (!ulp_flow_db_active_flow_is_set(flow_tbl, fid)) {
+		BNXT_TF_DBG(ERR, "flow does not exist\n");
+		return -EINVAL;
+	}
+
+	/* check for max resource */
+	if ((flow_tbl->num_flows + 1) >= flow_tbl->tail_index) {
+		BNXT_TF_DBG(ERR, "Flow db has reached max resources\n");
+		return -ENOMEM;
+	}
+	fid_resource = &flow_tbl->flow_resources[fid];
+
+	if (!params->critical_resource) {
+		/* Not the critical_resource so allocate a resource */
+		idx = flow_tbl->flow_tbl_stack[flow_tbl->tail_index];
+		resource = &flow_tbl->flow_resources[idx];
+		flow_tbl->tail_index--;
+
+		/* Update the chain list of resource*/
+		ULP_FLOW_DB_RES_NXT_SET(resource->nxt_resource_idx,
+					fid_resource->nxt_resource_idx);
+		/* update the contents */
+		ulp_flow_db_res_params_to_info(resource, params);
+		ULP_FLOW_DB_RES_NXT_RESET(fid_resource->nxt_resource_idx);
+		ULP_FLOW_DB_RES_NXT_SET(fid_resource->nxt_resource_idx,
+					idx);
+	} else {
+		/* critical resource. Just update the fid resource */
+		ulp_flow_db_res_params_to_info(fid_resource, params);
+	}
+
+	/* all good, return success */
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index a2ee8fa..f6055a5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -53,6 +53,15 @@ struct bnxt_ulp_flow_db {
 	struct bnxt_ulp_flow_tbl	flow_tbl[BNXT_ULP_FLOW_TABLE_MAX];
 };
 
+/* flow db resource params to add resources */
+struct ulp_flow_db_res_params {
+	enum tf_dir			direction;
+	enum bnxt_ulp_resource_func	resource_func;
+	uint64_t			resource_hndl;
+	uint32_t			resource_type;
+	uint32_t			critical_resource;
+};
+
 /*
  * Initialize the flow database. Memory is allocated in this
  * call and assigned to the flow database.
@@ -74,4 +83,20 @@ int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt);
  */
 int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
 
+/*
+ * Allocate a resource and add it to the flow database entry.
+ * The params->critical_resource has to be set to 0 to allocate a new resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in] The contents to be copied into resource
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params);
+
 #endif /* _ULP_FLOW_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
new file mode 100644
index 0000000..9cfc382
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -0,0 +1,364 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include "bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_tf_common.h"
+#include "ulp_utils.h"
+#include "bnxt_ulp.h"
+#include "tfp.h"
+#include "tf_ext_flow_handle.h"
+#include "ulp_mark_mgr.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+
+int32_t
+ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms);
+
+/*
+ * Get the size of the action property for a given index.
+ *
+ * idx [in] The index for the action property
+ *
+ * returns the size of the action property.
+ */
+static uint32_t
+ulp_mapper_act_prop_size_get(uint32_t idx)
+{
+	if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST)
+		return 0;
+	return ulp_act_prop_map_table[idx];
+}
+
+/*
+ * Get the list of result fields that implement the flow action
+ *
+ * tbl [in] A single table instance to get the results fields
+ * from num_flds [out] The number of data fields in the returned
+ * array
+ * returns array of data fields, or NULL on error
+ */
+static struct bnxt_ulp_mapper_result_field_info *
+ulp_mapper_act_result_fields_get(struct bnxt_ulp_mapper_act_tbl_info *tbl,
+				 uint32_t *num_rslt_flds,
+				 uint32_t *num_encap_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_rslt_flds || !num_encap_flds)
+		return NULL;
+
+	idx		= tbl->result_start_idx;
+	*num_rslt_flds	= tbl->result_num_fields;
+	*num_encap_flds = tbl->encap_num_fields;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_act_result_field_list[idx];
+}
+
+static int32_t
+ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
+				struct bnxt_ulp_mapper_result_field_info *fld,
+				struct ulp_blob *blob)
+{
+	uint16_t idx, size_idx;
+	uint8_t	 *val = NULL;
+	uint64_t regval;
+	uint32_t val_size = 0, field_size = 0;
+
+	switch (fld->result_opcode) {
+	case BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT:
+		val = fld->result_operand;
+		if (!ulp_blob_push(blob, val, fld->field_bit_size)) {
+			BNXT_TF_DBG(ERR, "Failed to add field\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+
+		if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+			BNXT_TF_DBG(ERR, "act_prop[%d] oob\n", idx);
+			return -EINVAL;
+		}
+		val = &parms->act_prop->act_details[idx];
+		field_size = ulp_mapper_act_prop_size_get(idx);
+		if (fld->field_bit_size < ULP_BYTE_2_BITS(field_size)) {
+			field_size  = field_size -
+			    ((fld->field_bit_size + 7) / 8);
+			val += field_size;
+		}
+		if (!ulp_blob_push(blob, val, fld->field_bit_size)) {
+			BNXT_TF_DBG(ERR, "push field failed\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP_SZ:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+
+		if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+			BNXT_TF_DBG(ERR, "act_prop[%d] oob\n", idx);
+			return -EINVAL;
+		}
+		val = &parms->act_prop->act_details[idx];
+
+		/* get the size index next */
+		if (!ulp_operand_read(&fld->result_operand[sizeof(uint16_t)],
+				      (uint8_t *)&size_idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+		size_idx = tfp_be_to_cpu_16(size_idx);
+
+		if (size_idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+			BNXT_TF_DBG(ERR, "act_prop[%d] oob\n", size_idx);
+			return -EINVAL;
+		}
+		memcpy(&val_size, &parms->act_prop->act_details[size_idx],
+		       sizeof(uint32_t));
+		val_size = tfp_be_to_cpu_32(val_size);
+		val_size = ULP_BYTE_2_BITS(val_size);
+		ulp_blob_push_encap(blob, val, val_size);
+		break;
+	case BNXT_ULP_RESULT_OPC_SET_TO_REGFILE:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+
+		idx = tfp_be_to_cpu_16(idx);
+		/* Uninitialized regfile entries return 0 */
+		if (!ulp_regfile_read(parms->regfile, idx, &regval)) {
+			BNXT_TF_DBG(ERR, "regfile[%d] read oob\n", idx);
+			return -EINVAL;
+		}
+
+		val = ulp_blob_push_64(blob, &regval, fld->field_bit_size);
+		if (!val) {
+			BNXT_TF_DBG(ERR, "push field failed\n");
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/* Function to alloc action record and set the table. */
+static int32_t
+ulp_mapper_action_alloc_and_set(struct bnxt_ulp_mapper_parms *parms,
+				struct ulp_blob *blob)
+{
+	struct ulp_flow_db_res_params		fid_parms;
+	struct tf_alloc_tbl_entry_parms		alloc_parms = { 0 };
+	struct tf_free_tbl_entry_parms		free_parms = { 0 };
+	struct bnxt_ulp_mapper_act_tbl_info	*atbls = parms->atbls;
+	int32_t					rc = 0;
+	int32_t trc;
+	uint64_t				idx;
+
+	/* Set the allocation parameters for the table */
+	alloc_parms.dir = atbls->direction;
+	alloc_parms.type = atbls->table_type;
+	alloc_parms.search_enable = atbls->srch_b4_alloc;
+	alloc_parms.result = ulp_blob_data_get(blob,
+					       &alloc_parms.result_sz_in_bytes);
+	if (!alloc_parms.result) {
+		BNXT_TF_DBG(ERR, "blob is not populated\n");
+		return -EINVAL;
+	}
+
+	rc = tf_alloc_tbl_entry(parms->tfp, &alloc_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "table type= [%d] dir = [%s] alloc failed\n",
+			    alloc_parms.type,
+			    (alloc_parms.dir == TF_DIR_RX) ? "RX" : "TX");
+		return rc;
+	}
+
+	/*
+	 * Need to calculate the idx for the result record.
+	 * TBD: Need to get the stride from tflib instead of having to
+	 * understand the construction of the pointer.
+	 */
+	uint64_t tmpidx;
+
+	if (atbls->table_type == TF_TBL_TYPE_EXT)
+		tmpidx = (alloc_parms.idx * TF_ACTION_RECORD_SZ) >> 4;
+	else
+		tmpidx = alloc_parms.idx;
+
+	idx = tfp_cpu_to_be_64(tmpidx);
+
+	/* Store the allocated index for future use in the regfile */
+	rc = ulp_regfile_write(parms->regfile, atbls->regfile_wr_idx, idx);
+	if (!rc) {
+		BNXT_TF_DBG(ERR, "regfile[%d] write failed\n",
+			    atbls->regfile_wr_idx);
+		rc = -EINVAL;
+		goto error;
+	}
+
+	/*
+	 * Call the set_tbl_entry API if search is not enabled or the
+	 * searched entry is not found.
+	 */
+	if (!atbls->srch_b4_alloc || !alloc_parms.hit) {
+		struct tf_set_tbl_entry_parms set_parm = { 0 };
+		uint16_t	length;
+
+		set_parm.dir	= atbls->direction;
+		set_parm.type	= atbls->table_type;
+		set_parm.idx	= alloc_parms.idx;
+		set_parm.data	= ulp_blob_data_get(blob, &length);
+		set_parm.data_sz_in_bytes = length / 8;
+
+		if (set_parm.type == TF_TBL_TYPE_EXT)
+			bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
+							&set_parm.tbl_scope_id);
+		else
+			set_parm.tbl_scope_id = 0;
+
+		/* set the table entry */
+		rc = tf_set_tbl_entry(parms->tfp, &set_parm);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "table[%d][%s][%d] set failed\n",
+				    set_parm.type,
+				    (set_parm.dir == TF_DIR_RX) ? "RX" : "TX",
+				    set_parm.idx);
+			goto error;
+		}
+	}
+
+	/* Link the resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction		= atbls->direction;
+	fid_parms.resource_func		= atbls->resource_func;
+	fid_parms.resource_type		= atbls->table_type;
+	fid_parms.resource_hndl		= alloc_parms.idx;
+	fid_parms.critical_resource	= 0;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		rc = -EINVAL;
+		goto error;
+	}
+
+	return 0;
+error:
+
+	free_parms.dir	= alloc_parms.dir;
+	free_parms.type	= alloc_parms.type;
+	free_parms.idx	= alloc_parms.idx;
+
+	trc = tf_free_tbl_entry(parms->tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free table entry on failure\n");
+
+	return rc;
+}
+
+/*
+ * Function to process the action info. Iterate through the list
+ * of action info templates and process each one.
+ */
+static int32_t
+ulp_mapper_action_info_process(struct bnxt_ulp_mapper_parms *parms,
+			       struct bnxt_ulp_mapper_act_tbl_info *tbl)
+{
+	struct ulp_blob					blob;
+	struct bnxt_ulp_mapper_result_field_info	*flds, *fld;
+	uint32_t					num_flds = 0;
+	uint32_t					encap_flds = 0;
+	uint32_t					i;
+	int32_t						rc;
+	uint16_t					bit_size;
+
+	if (!tbl || !parms->act_prop || !parms->act_bitmap || !parms->regfile)
+		return -EINVAL;
+
+	/* use the max size if encap is enabled */
+	if (tbl->encap_num_fields)
+		bit_size = BNXT_ULP_FLMP_BLOB_SIZE_IN_BITS;
+	else
+		bit_size = tbl->result_bit_size;
+	if (!ulp_blob_init(&blob, bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "action blob init failed\n");
+		return -EINVAL;
+	}
+
+	flds = ulp_mapper_act_result_fields_get(tbl, &num_flds, &encap_flds);
+	if (!flds || !num_flds) {
+		BNXT_TF_DBG(ERR, "Template undefined for action\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < (num_flds + encap_flds); i++) {
+		fld = &flds[i];
+		rc = ulp_mapper_result_field_process(parms,
+						     fld,
+						     &blob);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Action field failed\n");
+			return rc;
+		}
+		/* set the swap index if 64 bit swap is enabled */
+		if (parms->encap_byte_swap && encap_flds) {
+			if ((i + 1) == num_flds)
+				ulp_blob_encap_swap_idx_set(&blob);
+			/* if 64 bit swap is enabled perform the 64bit swap */
+			if ((i + 1) == (num_flds + encap_flds))
+				ulp_blob_perform_encap_swap(&blob);
+		}
+	}
+
+	rc = ulp_mapper_action_alloc_and_set(parms, &blob);
+	return rc;
+}
+
+/*
+ * Function to process the action template. Iterate through the list
+ * of action info templates and process each one.
+ */
+int32_t
+ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
+{
+	uint32_t	i;
+	int32_t		rc = 0;
+
+	if (!parms->atbls || !parms->num_atbls) {
+		BNXT_TF_DBG(ERR, "No action tables for template[%d][%d].\n",
+			    parms->dev_id, parms->act_tid);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < parms->num_atbls; i++) {
+		rc = ulp_mapper_action_info_process(parms, &parms->atbls[i]);
+		if (rc)
+			return rc;
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
new file mode 100644
index 0000000..adbcec2
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_MAPPER_H_
+#define _ULP_MAPPER_H_
+
+#include <tf_core.h>
+#include <rte_log.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_ulp.h"
+#include "ulp_utils.h"
+
+/* Internal Structure for passing the arguments around */
+struct bnxt_ulp_mapper_parms {
+	uint32_t				dev_id;
+	enum bnxt_ulp_byte_order		order;
+	uint32_t				act_tid;
+	struct bnxt_ulp_mapper_act_tbl_info	*atbls;
+	uint32_t				num_atbls;
+	uint32_t				class_tid;
+	struct bnxt_ulp_mapper_class_tbl_info	*ctbls;
+	uint32_t				num_ctbls;
+	struct ulp_rte_act_prop			*act_prop;
+	struct ulp_rte_act_bitmap		*act_bitmap;
+	struct ulp_rte_hdr_field		*hdr_field;
+	struct ulp_regfile			*regfile;
+	struct tf				*tfp;
+	struct bnxt_ulp_context			*ulp_ctx;
+	uint8_t					encap_byte_swap;
+	uint32_t				fid;
+	enum bnxt_ulp_flow_db_tables		tbl_idx;
+};
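+
+/*
+ * Illustrative sketch (not part of this patch; caller-side variable names
+ * are assumed): the mapper entry points expect a fully populated parms
+ * block, roughly:
+ *
+ *	struct bnxt_ulp_mapper_parms parms = { 0 };
+ *
+ *	parms.atbls      = atbls;
+ *	parms.num_atbls  = num_atbls;
+ *	parms.act_prop   = &act_prop;
+ *	parms.act_bitmap = &act_bitmap;
+ *	parms.regfile    = &regfile;
+ *	parms.tfp        = tfp;
+ *	parms.ulp_ctx    = ulp_ctx;
+ *	rc = ulp_mapper_action_tbls_process(&parms);
+ */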
+
+#endif /* _ULP_MAPPER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index bc0ffd3..75bf967 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -11,6 +11,88 @@
 #include "ulp_template_db.h"
 #include "ulp_template_field_db.h"
 #include "ulp_template_struct.h"
+
+uint32_t ulp_act_prop_map_table[] = {
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_SZ] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_SZ,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_TYPE] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_TYPE,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_NUM] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_NUM,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_L3_TYPE,
+	[BNXT_ULP_ACT_PROP_IDX_MPLS_POP_NUM] =
+		BNXT_ULP_ACT_PROP_SZ_MPLS_POP_NUM,
+	[BNXT_ULP_ACT_PROP_IDX_MPLS_PUSH_NUM] =
+		BNXT_ULP_ACT_PROP_SZ_MPLS_PUSH_NUM,
+	[BNXT_ULP_ACT_PROP_IDX_VNIC] =
+		BNXT_ULP_ACT_PROP_SZ_VNIC,
+	[BNXT_ULP_ACT_PROP_IDX_VPORT] =
+		BNXT_ULP_ACT_PROP_SZ_VPORT,
+	[BNXT_ULP_ACT_PROP_IDX_MARK] =
+		BNXT_ULP_ACT_PROP_SZ_MARK,
+	[BNXT_ULP_ACT_PROP_IDX_COUNT] =
+		BNXT_ULP_ACT_PROP_SZ_COUNT,
+	[BNXT_ULP_ACT_PROP_IDX_METER] =
+		BNXT_ULP_ACT_PROP_SZ_METER,
+	[BNXT_ULP_ACT_PROP_IDX_SET_MAC_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_MAC_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_MAC_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_MAC_DST,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_VLAN] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_VLAN,
+	[BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_PCP] =
+		BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_PCP,
+	[BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_VID] =
+		BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_VID,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV4_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV4_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV4_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV4_DST,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV6_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV6_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV6_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV6_DST,
+	[BNXT_ULP_ACT_PROP_IDX_SET_TP_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_TP_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_TP_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_TP_DST,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_0] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_0,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_1] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_1,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_2] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_2,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_3] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_3,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_4] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_4,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_5] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_5,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_6] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_6,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_7] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_7,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_SMAC,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_IP,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_UDP,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN,
+	[BNXT_ULP_ACT_PROP_IDX_LAST] =
+		BNXT_ULP_ACT_PROP_SZ_LAST
+};
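+
+/*
+ * Note: ulp_act_prop_map_table is indexed by the property byte offset
+ * (BNXT_ULP_ACT_PROP_IDX_*) and yields the property size in bytes
+ * (BNXT_ULP_ACT_PROP_SZ_*), e.g.
+ * ulp_act_prop_map_table[BNXT_ULP_ACT_PROP_IDX_MARK] ==
+ * BNXT_ULP_ACT_PROP_SZ_MARK == 4.
+ */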
 
 struct bnxt_ulp_device_params ulp_device_params[] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
@@ -26,3 +108,164 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 	}
 };
 
+struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {BNXT_ULP_SYM_DECAP_FUNC_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP,
+	.result_operand = {(BNXT_ULP_ACT_PROP_IDX_VNIC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 1eed828..e52cc3f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -39,9 +39,113 @@ enum bnxt_ulp_regfile_index {
 	BNXT_ULP_REGFILE_INDEX_LAST
 };
 
+enum bnxt_ulp_resource_func {
+	BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE = 0,
+	BNXT_ULP_RESOURCE_FUNC_EM_TABLE = 1,
+	BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE = 2,
+	BNXT_ULP_RESOURCE_FUNC_IDENTIFIER = 3,
+	BNXT_ULP_RESOURCE_FUNC_HW_FID = 4,
+	BNXT_ULP_RESOURCE_FUNC_LAST = 5
+};
+
+enum bnxt_ulp_result_opc {
+	BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT = 0,
+	BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP = 1,
+	BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP_SZ = 2,
+	BNXT_ULP_RESULT_OPC_SET_TO_REGFILE = 3,
+	BNXT_ULP_RESULT_OPC_LAST = 4
+};
+
 enum bnxt_ulp_sym {
+	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
 	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_YES = 1
 };
 
+enum bnxt_ulp_act_prop_sz {
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_SZ = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_TYPE = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_NUM = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_L3_TYPE = 4,
+	BNXT_ULP_ACT_PROP_SZ_MPLS_POP_NUM = 4,
+	BNXT_ULP_ACT_PROP_SZ_MPLS_PUSH_NUM = 4,
+	BNXT_ULP_ACT_PROP_SZ_VNIC = 4,
+	BNXT_ULP_ACT_PROP_SZ_VPORT = 4,
+	BNXT_ULP_ACT_PROP_SZ_MARK = 4,
+	BNXT_ULP_ACT_PROP_SZ_COUNT = 4,
+	BNXT_ULP_ACT_PROP_SZ_METER = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_MAC_SRC = 8,
+	BNXT_ULP_ACT_PROP_SZ_SET_MAC_DST = 8,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_VLAN = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_PCP = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_VID = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV4_SRC = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV4_DST = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV6_SRC = 16,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV6_DST = 16,
+	BNXT_ULP_ACT_PROP_SZ_SET_TP_SRC = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_TP_DST = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_0 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_1 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_2 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_3 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_4 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_5 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_6 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_7 = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC = 6,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_SMAC = 6,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG = 8,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP = 32,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SRC = 16,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_UDP = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN = 32,
+	BNXT_ULP_ACT_PROP_SZ_LAST = 4
+};
+
+enum bnxt_ulp_act_prop_idx {
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ = 0,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ = 4,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_SZ = 8,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_TYPE = 12,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_NUM = 16,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE = 20,
+	BNXT_ULP_ACT_PROP_IDX_MPLS_POP_NUM = 24,
+	BNXT_ULP_ACT_PROP_IDX_MPLS_PUSH_NUM = 28,
+	BNXT_ULP_ACT_PROP_IDX_VNIC = 32,
+	BNXT_ULP_ACT_PROP_IDX_VPORT = 36,
+	BNXT_ULP_ACT_PROP_IDX_MARK = 40,
+	BNXT_ULP_ACT_PROP_IDX_COUNT = 44,
+	BNXT_ULP_ACT_PROP_IDX_METER = 48,
+	BNXT_ULP_ACT_PROP_IDX_SET_MAC_SRC = 52,
+	BNXT_ULP_ACT_PROP_IDX_SET_MAC_DST = 60,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_VLAN = 68,
+	BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_PCP = 72,
+	BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_VID = 76,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV4_SRC = 80,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV4_DST = 84,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_SRC = 88,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_DST = 104,
+	BNXT_ULP_ACT_PROP_IDX_SET_TP_SRC = 120,
+	BNXT_ULP_ACT_PROP_IDX_SET_TP_DST = 124,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_0 = 128,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_1 = 132,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_2 = 136,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_3 = 140,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_4 = 144,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_5 = 148,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_6 = 152,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_7 = 156,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC = 160,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC = 166,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG = 172,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP = 180,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC = 212,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP = 228,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN = 232,
+	BNXT_ULP_ACT_PROP_IDX_LAST = 264
+};
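+
+/*
+ * Note: each BNXT_ULP_ACT_PROP_IDX_* value is the byte offset of that
+ * property within struct ulp_rte_act_prop, and the matching
+ * BNXT_ULP_ACT_PROP_SZ_* value is its width.  Offsets are cumulative:
+ * e.g. SET_MAC_SRC starts at 52 and is 8 bytes wide, so SET_MAC_DST
+ * starts at 60.
+ */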
+
 #endif /* _ULP_TEMPLATE_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 4b9d0b2..2b0a3d7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,7 +17,15 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
-/* Device specific parameters. */
+/*
+ * Structure to hold the action property details.
+ * It is an array of BNXT_ULP_ACT_PROP_IDX_LAST bytes.
+ */
+struct ulp_rte_act_prop {
+	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
+};
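+
+/*
+ * Illustrative sketch (not part of this patch): a parser would populate a
+ * property by copying into the buffer at its byte offset, e.g. for a
+ * 32-bit mark value already in network byte order:
+ *
+ *	memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_MARK],
+ *	       &mark, BNXT_ULP_ACT_PROP_SZ_MARK);
+ */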
+
+/* Device specific parameters */
 struct bnxt_ulp_device_params {
 	uint8_t				description[16];
 	uint32_t			global_fid_enable;
@@ -31,10 +39,44 @@ struct bnxt_ulp_device_params {
 	uint32_t			num_resources_per_flow;
 };
 
+struct bnxt_ulp_mapper_act_tbl_info {
+	enum bnxt_ulp_resource_func	resource_func;
+	enum tf_tbl_type table_type;
+	uint8_t		direction;
+	uint8_t		srch_b4_alloc;
+	uint32_t	result_start_idx;
+	uint16_t	result_bit_size;
+	uint16_t	encap_num_fields;
+	uint16_t	result_num_fields;
+
+	enum bnxt_ulp_regfile_index	regfile_wr_idx;
+};
+
+struct bnxt_ulp_mapper_result_field_info {
+	uint8_t				name[64];
+	enum bnxt_ulp_result_opc	result_opcode;
+	uint16_t			field_bit_size;
+	uint8_t				result_operand[16];
+};
+
 /*
- * The ulp_device_params is indexed by the dev_id.
- * This table maintains the device specific parameters.
+ * The ulp_device_params is indexed by the dev_id
+ * This table maintains the device specific parameters
  */
 extern struct bnxt_ulp_device_params ulp_device_params[];
 
+/*
+ * The ulp_act_result_field_list provides the instructions for creating an
+ * action record.  It uses the same structure as the result list, but is
+ * only used for actions.
+ */
+extern
+struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[];
+
+/*
+ * The ulp_act_prop_map_table provides the mapping to index and size of action
+ * properties.
+ */
+extern uint32_t ulp_act_prop_map_table[];
+
 #endif /* _ULP_TEMPLATE_STRUCT_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 20/34] net/bnxt: add support to process key tables
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (18 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 19/34] net/bnxt: add support to process action tables Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 21/34] net/bnxt: add support to free key and action tables Venkat Duvvuru
                     ` (15 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch creates the classifier table entries for a flow.
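
An illustrative sketch of the regfile handoff between class tables
(simplified from the code below; variable names are assumed): a table
that allocates an identifier publishes it to the regfile, and a later
table consumes it when building its key or result blob:

	/* producer: ulp_mapper_ident_process() */
	id = (uint64_t)tfp_cpu_to_be_64(iparms.id);
	ulp_regfile_write(parms->regfile, ident->regfile_wr_idx, id);

	/* consumer: ulp_mapper_keymask_field_process() */
	ulp_regfile_read(parms->regfile, idx, &regval);
	ulp_blob_push_64(&blob, &regval, bitlen);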

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c            | 773 +++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c          |  80 ++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h          |  18 +
 drivers/net/bnxt/tf_ulp/ulp_template_db.c       | 896 ++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h       | 142 +++-
 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h | 133 ++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h   |  93 ++-
 7 files changed, 2127 insertions(+), 8 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 9cfc382..a041394 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -19,6 +19,9 @@
 int32_t
 ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms);
 
+int32_t
+ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms);
+
 /*
  * Get the size of the action property for a given index.
  *
@@ -37,10 +40,65 @@ ulp_mapper_act_prop_size_get(uint32_t idx)
 /*
  * Get the list of result fields that implement the flow action
  *
+ * ctxt [in] The ulp context
+ *
+ * tbl [in] A single table instance to get the key fields from
+ *
+ * num_flds [out] The number of key fields in the returned array
+ *
+ * returns array of Key fields, or NULL on error
+ */
+static struct bnxt_ulp_mapper_class_key_field_info *
+ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			  uint32_t *num_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_flds)
+		return NULL;
+
+	idx		= tbl->key_start_idx;
+	*num_flds	= tbl->key_num_fields;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_class_key_field_list[idx];
+}
+
+/*
+ * Get the list of data fields that implement the flow.
+ *
+ * tbl [in] A single table instance to get the data fields from
+ *
+ * num_flds [out] The number of data fields in the returned array.
+ *
+ * Returns array of data fields, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_result_field_info *
+ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			     uint32_t *num_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_flds)
+		return NULL;
+
+	idx		= tbl->result_start_idx;
+	*num_flds	= tbl->result_num_fields;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_class_result_field_list[idx];
+}
+
+/*
+ * Get the list of result fields that implement the flow action.
+ *
  * tbl [in] A single table instance to get the results fields
  * from num_flds [out] The number of data fields in the returned
- * array
- * returns array of data fields, or NULL on error
+ * array.
+ *
+ * Returns array of data fields, or NULL on error.
  */
 static struct bnxt_ulp_mapper_result_field_info *
 ulp_mapper_act_result_fields_get(struct bnxt_ulp_mapper_act_tbl_info *tbl,
@@ -60,6 +118,106 @@ ulp_mapper_act_result_fields_get(struct bnxt_ulp_mapper_act_tbl_info *tbl,
 	return &ulp_act_result_field_list[idx];
 }
 
+/*
+ * Get the list of ident fields that implement the flow
+ *
+ * tbl [in] A single table instance to get the ident fields from
+ *
+ * num_flds [out] The number of ident fields in the returned array
+ *
+ * Returns array of ident fields, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_ident_info *
+ulp_mapper_ident_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			    uint32_t *num_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_flds)
+		return NULL;
+
+	idx = tbl->ident_start_idx;
+	*num_flds = tbl->ident_nums;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_ident_list[idx];
+}
+
+static int32_t
+ulp_mapper_ident_process(struct bnxt_ulp_mapper_parms *parms,
+			 struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			 struct bnxt_ulp_mapper_ident_info *ident)
+{
+	struct ulp_flow_db_res_params	fid_parms;
+	uint64_t id = 0;
+	int32_t idx;
+	struct tf_alloc_identifier_parms iparms = { 0 };
+	struct tf_free_identifier_parms free_parms = { 0 };
+	struct tf *tfp;
+	int rc;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get tf pointer\n");
+		return -EINVAL;
+	}
+
+	idx = ident->regfile_wr_idx;
+
+	iparms.ident_type = ident->ident_type;
+	iparms.dir = tbl->direction;
+
+	rc = tf_alloc_identifier(tfp, &iparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Alloc ident %s:%d failed.\n",
+			    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
+			    iparms.ident_type);
+		return rc;
+	}
+
+	id = (uint64_t)tfp_cpu_to_be_64(iparms.id);
+	if (!ulp_regfile_write(parms->regfile, idx, id)) {
+		BNXT_TF_DBG(ERR, "Regfile[%d] write failed.\n", idx);
+		rc = -EINVAL;
+		/* Need to free the identifier, so goto error */
+		goto error;
+	}
+
+	/* Link the resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction		= tbl->direction;
+	fid_parms.resource_func	= ident->resource_func;
+	fid_parms.resource_type	= ident->ident_type;
+	fid_parms.resource_hndl	= iparms.id;
+	fid_parms.critical_resource	= 0;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		/* Need to free the identifier, so goto error */
+		goto error;
+	}
+
+	return 0;
+
+error:
+	/* Need to free the identifier */
+	free_parms.dir		= tbl->direction;
+	free_parms.ident_type	= ident->ident_type;
+	free_parms.id		= iparms.id;
+
+	(void)tf_free_identifier(tfp, &free_parms);
+
+	BNXT_TF_DBG(ERR, "Ident process failed for %s:%s\n",
+		    ident->name,
+		    (tbl->direction == TF_DIR_RX) ? "RX" : "TX");
+	return rc;
+}
+
 static int32_t
 ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 				struct bnxt_ulp_mapper_result_field_info *fld,
@@ -163,6 +321,89 @@ ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 
 /* Function to alloc action record and set the table. */
 static int32_t
+ulp_mapper_keymask_field_process(struct bnxt_ulp_mapper_parms *parms,
+				 struct bnxt_ulp_mapper_class_key_field_info *f,
+				 struct ulp_blob *blob,
+				 uint8_t is_key)
+{
+	uint64_t regval;
+	uint16_t idx, bitlen;
+	uint32_t opcode;
+	uint8_t *operand;
+	struct ulp_regfile *regfile = parms->regfile;
+	uint8_t *val = NULL;
+	struct bnxt_ulp_mapper_class_key_field_info *fld = f;
+
+	if (is_key) {
+		operand = fld->spec_operand;
+		opcode	= fld->spec_opcode;
+	} else {
+		operand = fld->mask_operand;
+		opcode	= fld->mask_opcode;
+	}
+
+	bitlen = fld->field_bit_size;
+
+	switch (opcode) {
+	case BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT:
+		val = operand;
+		if (!ulp_blob_push(blob, val, bitlen)) {
+			BNXT_TF_DBG(ERR, "push to key blob failed\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_SPEC_OPC_ADD_PAD:
+		if (!ulp_blob_pad_push(blob, bitlen)) {
+			BNXT_TF_DBG(ERR, "Pad too large for blob\n");
+			return -EINVAL;
+		}
+
+		break;
+	case BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD:
+		if (!ulp_operand_read(operand, (uint8_t *)&idx,
+				      sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "key operand read failed.\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+		if (is_key)
+			val = parms->hdr_field[idx].spec;
+		else
+			val = parms->hdr_field[idx].mask;
+
+		if (!ulp_blob_push(blob, val, bitlen)) {
+			BNXT_TF_DBG(ERR, "push to key blob failed\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_SPEC_OPC_SET_TO_REGFILE:
+		if (!ulp_operand_read(operand, (uint8_t *)&idx,
+				      sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "key operand read failed.\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+
+		if (!ulp_regfile_read(regfile, idx, &regval)) {
+			BNXT_TF_DBG(ERR, "regfile[%d] read failed.\n",
+				    idx);
+			return -EINVAL;
+		}
+
+		val = ulp_blob_push_64(blob, &regval, bitlen);
+		if (!val) {
+			BNXT_TF_DBG(ERR, "push to key blob failed\n");
+			return -EINVAL;
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
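+
+/*
+ * Illustrative sketch (mirrors the SVIF entry in ulp_class_key_field_list):
+ * a key field whose spec and mask both come from the parsed header fields;
+ * the 16-bit header-field index is big-endian encoded in the operand, as
+ * decoded by the SET_TO_HDR_FIELD case above:
+ *
+ *	.field_bit_size = 8,
+ *	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD,
+ *	.mask_operand = {(BNXT_ULP_HF0_SVIF_INDEX >> 8) & 0xff,
+ *		BNXT_ULP_HF0_SVIF_INDEX & 0xff},
+ *	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+ *	.spec_operand = {(BNXT_ULP_HF0_SVIF_INDEX >> 8) & 0xff,
+ *		BNXT_ULP_HF0_SVIF_INDEX & 0xff}
+ */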
+
+/* Function to alloc action record and set the table. */
+static int32_t
 ulp_mapper_action_alloc_and_set(struct bnxt_ulp_mapper_parms *parms,
 				struct ulp_blob *blob)
 {
@@ -338,6 +579,489 @@ ulp_mapper_action_info_process(struct bnxt_ulp_mapper_parms *parms,
 	return rc;
 }
 
+static int32_t
+ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			    struct bnxt_ulp_mapper_class_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_class_key_field_info	*kflds;
+	struct ulp_blob key, mask, data;
+	uint32_t i, num_kflds;
+	struct tf *tfp;
+	int32_t rc, trc;
+	struct tf_alloc_tcam_entry_parms aparms		= { 0 };
+	struct tf_set_tcam_entry_parms sparms		= { 0 };
+	struct ulp_flow_db_res_params	fid_parms	= { 0 };
+	struct tf_free_tcam_entry_parms free_parms	= { 0 };
+	uint32_t hit = 0;
+	uint16_t tmplen = 0;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get truflow pointer\n");
+		return -EINVAL;
+	}
+
+	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
+	if (!kflds || !num_kflds) {
+		BNXT_TF_DBG(ERR, "Failed to get key fields\n");
+		return -EINVAL;
+	}
+
+	if (!ulp_blob_init(&key, tbl->key_bit_size, parms->order) ||
+	    !ulp_blob_init(&mask, tbl->key_bit_size, parms->order) ||
+	    !ulp_blob_init(&data, tbl->result_bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "blob inits failed.\n");
+		return -EINVAL;
+	}
+
+	/* create the key/mask */
+	/*
+	 * NOTE: The WC table will require some kind of flag to handle the
+	 * mode bits within the key/mask
+	 */
+	for (i = 0; i < num_kflds; i++) {
+		/* Setup the key */
+		rc = ulp_mapper_keymask_field_process(parms, &kflds[i],
+						      &key, 1);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Key field set failed.\n");
+			return rc;
+		}
+
+		/* Setup the mask */
+		rc = ulp_mapper_keymask_field_process(parms, &kflds[i],
+						      &mask, 0);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Mask field set failed.\n");
+			return rc;
+		}
+	}
+
+	aparms.dir		= tbl->direction;
+	aparms.tcam_tbl_type	= tbl->table_type;
+	aparms.search_enable	= tbl->srch_b4_alloc;
+	aparms.key_sz_in_bits	= tbl->key_bit_size;
+	aparms.key		= ulp_blob_data_get(&key, &tmplen);
+	if (tbl->key_bit_size != tmplen) {
+		BNXT_TF_DBG(ERR, "Key len (%d) != Expected (%d)\n",
+			    tmplen, tbl->key_bit_size);
+		return -EINVAL;
+	}
+
+	aparms.mask		= ulp_blob_data_get(&mask, &tmplen);
+	if (tbl->key_bit_size != tmplen) {
+		BNXT_TF_DBG(ERR, "Mask len (%d) != Expected (%d)\n",
+			    tmplen, tbl->key_bit_size);
+		return -EINVAL;
+	}
+
+	aparms.priority		= tbl->priority;
+
+	/*
+	 * All failures after this allocation succeeds require the entry to
+	 * be freed; we cannot return directly on failure, but must goto error.
+	 */
+	rc = tf_alloc_tcam_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "tcam alloc failed rc=%d.\n", rc);
+		return rc;
+	}
+
+	hit = aparms.hit;
+
+	/* Build the result */
+	if (!tbl->srch_b4_alloc || !hit) {
+		struct bnxt_ulp_mapper_result_field_info *dflds;
+		struct bnxt_ulp_mapper_ident_info *idents;
+		uint32_t num_dflds, num_idents;
+
+		/* Alloc identifiers */
+		idents = ulp_mapper_ident_fields_get(tbl, &num_idents);
+
+		for (i = 0; i < num_idents; i++) {
+			rc = ulp_mapper_ident_process(parms, tbl, &idents[i]);
+
+			/* Already logged the error, just return */
+			if (rc)
+				goto error;
+		}
+
+		/* Create the result data blob */
+		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
+		if (!dflds || !num_dflds) {
+			BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
+			rc = -EINVAL;
+			goto error;
+		}
+
+		for (i = 0; i < num_dflds; i++) {
+			rc = ulp_mapper_result_field_process(parms,
+							     &dflds[i],
+							     &data);
+			if (rc) {
+				BNXT_TF_DBG(ERR, "Failed to set data fields\n");
+				goto error;
+			}
+		}
+
+		sparms.dir		= aparms.dir;
+		sparms.tcam_tbl_type	= aparms.tcam_tbl_type;
+		sparms.idx		= aparms.idx;
+		/* Already verified the key/mask lengths */
+		sparms.key		= ulp_blob_data_get(&key, &tmplen);
+		sparms.mask		= ulp_blob_data_get(&mask, &tmplen);
+		sparms.key_sz_in_bits	= tbl->key_bit_size;
+		sparms.result		= ulp_blob_data_get(&data, &tmplen);
+
+		if (tbl->result_bit_size != tmplen) {
+			BNXT_TF_DBG(ERR, "Result len (%d) != Expected (%d)\n",
+				    tmplen, tbl->result_bit_size);
+			rc = -EINVAL;
+			goto error;
+		}
+		sparms.result_sz_in_bits = tbl->result_bit_size;
+
+		rc = tf_set_tcam_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "tcam[%d][%s][%d] write failed.\n",
+				    sparms.tcam_tbl_type,
+				    (sparms.dir == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx);
+			goto error;
+		}
+	} else {
+		BNXT_TF_DBG(ERR, "Not supporting search before alloc now\n");
+		rc = -EINVAL;
+		goto error;
+	}
+
+	/* Link the resource to the flow in the flow db */
+	fid_parms.direction = tbl->direction;
+	fid_parms.resource_func	= tbl->resource_func;
+	fid_parms.resource_type	= tbl->table_type;
+	fid_parms.critical_resource = tbl->critical_resource;
+	fid_parms.resource_hndl	= aparms.idx;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		/* Need to free the identifier, so goto error */
+		goto error;
+	}
+
+	return 0;
+error:
+	free_parms.dir			= tbl->direction;
+	free_parms.tcam_tbl_type	= tbl->table_type;
+	free_parms.idx			= aparms.idx;
+	trc = tf_free_tcam_entry(tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free tcam[%d][%d][%d] on failure\n",
+			    tbl->table_type, tbl->direction, aparms.idx);
+
+	return rc;
+}
+
+static int32_t
+ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			  struct bnxt_ulp_mapper_class_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_class_key_field_info	*kflds;
+	struct bnxt_ulp_mapper_result_field_info *dflds;
+	struct ulp_blob key, data;
+	uint32_t i, num_kflds, num_dflds;
+	uint16_t tmplen;
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	struct ulp_rte_act_prop	 *a_prop = parms->act_prop;
+	struct ulp_flow_db_res_params	fid_parms = { 0 };
+	struct tf_insert_em_entry_parms iparms = { 0 };
+	struct tf_delete_em_entry_parms free_parms = { 0 };
+	int32_t	trc;
+	int32_t rc = 0;
+
+	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
+	if (!kflds || !num_kflds) {
+		BNXT_TF_DBG(ERR, "Failed to get key fields\n");
+		return -EINVAL;
+	}
+
+	/* Initialize the key/result blobs */
+	if (!ulp_blob_init(&key, tbl->blob_key_bit_size, parms->order) ||
+	    !ulp_blob_init(&data, tbl->result_bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "blob inits failed.\n");
+		return -EINVAL;
+	}
+
+	/* create the key */
+	for (i = 0; i < num_kflds; i++) {
+		/* Setup the key */
+		rc = ulp_mapper_keymask_field_process(parms, &kflds[i],
+						      &key, 1);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Key field set failed.\n");
+			return rc;
+		}
+	}
+
+	/*
+	 * TBD: Normally should process identifiers in case of using recycle or
+	 * loopback.  Not supporting recycle for now.
+	 */
+
+	/* Create the result data blob */
+	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
+	if (!dflds || !num_dflds) {
+		BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_dflds; i++) {
+		struct bnxt_ulp_mapper_result_field_info *fld;
+
+		fld = &dflds[i];
+
+		rc = ulp_mapper_result_field_process(parms,
+						     fld,
+						     &data);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Failed to set data fields.\n");
+			return rc;
+		}
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
+					     &iparms.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope rc=%d\n", rc);
+		return rc;
+	}
+
+	/*
+	 * NOTE: the actual blob size will differ from the size in the tbl
+	 * entry due to the padding.
+	 */
+	iparms.dup_check		= 0;
+	iparms.dir			= tbl->direction;
+	iparms.mem			= tbl->mem;
+	iparms.key			= ulp_blob_data_get(&key, &tmplen);
+	iparms.key_sz_in_bits		= tbl->key_bit_size;
+	iparms.em_record		= ulp_blob_data_get(&data, &tmplen);
+	iparms.em_record_sz_in_bits	= tbl->result_bit_size;
+
+	rc = tf_insert_em_entry(tfp, &iparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to insert em entry rc=%d.\n", rc);
+		return rc;
+	}
+
+	if (tbl->mark_enable &&
+	    ULP_BITMAP_ISSET(parms->act_bitmap->bits,
+			     BNXT_ULP_ACTION_BIT_MARK)) {
+		uint32_t val, mark, gfid, flag;
+		/* TBD: Need to determine if GFID is enabled globally */
+		if (sizeof(val) != BNXT_ULP_ACT_PROP_SZ_MARK) {
+			BNXT_TF_DBG(ERR, "Mark size (%d) != expected (%ld)\n",
+				    BNXT_ULP_ACT_PROP_SZ_MARK, sizeof(val));
+			rc = -EINVAL;
+			goto error;
+		}
+
+		memcpy(&val,
+		       &a_prop->act_details[BNXT_ULP_ACT_PROP_IDX_MARK],
+		       sizeof(val));
+
+		mark = tfp_be_to_cpu_32(val);
+
+		TF_GET_GFID_FROM_FLOW_ID(iparms.flow_id, gfid);
+		TF_GET_FLAG_FROM_FLOW_ID(iparms.flow_id, flag);
+
+		rc = ulp_mark_db_mark_add(parms->ulp_ctx,
+					  (flag == TF_GFID_TABLE_EXTERNAL),
+					  gfid,
+					  mark);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Failed to add mark to flow\n");
+			goto error;
+		}
+
+		/*
+		 * Link the mark resource to the flow in the flow db
+		 * The mark is never the critical resource, so it is 0.
+		 */
+		memset(&fid_parms, 0, sizeof(fid_parms));
+		fid_parms.direction	= tbl->direction;
+		fid_parms.resource_func	= BNXT_ULP_RESOURCE_FUNC_HW_FID;
+		fid_parms.resource_type	= tbl->table_type;
+		fid_parms.resource_hndl	= iparms.flow_id;
+		fid_parms.critical_resource = 0;
+
+		rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+					      parms->tbl_idx,
+					      parms->fid,
+					      &fid_parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Fail to link res to flow rc = %d\n",
+				    rc);
+			/* Need to free the identifier, so goto error */
+			goto error;
+		}
+	}
+
+	/* Link the EM resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction		= tbl->direction;
+	fid_parms.resource_func		= tbl->resource_func;
+	fid_parms.resource_type		= tbl->table_type;
+	fid_parms.critical_resource	= tbl->critical_resource;
+	fid_parms.resource_hndl		= iparms.flow_handle;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Fail to link res to flow rc = %d\n",
+			    rc);
+		/* Need to free the identifier, so goto error */
+		goto error;
+	}
+
+	return 0;
+error:
+	free_parms.dir		= iparms.dir;
+	free_parms.mem		= iparms.mem;
+	free_parms.tbl_scope_id	= iparms.tbl_scope_id;
+	free_parms.flow_handle	= iparms.flow_handle;
+
+	trc = tf_delete_em_entry(tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to delete EM entry on failed add\n");
+
+	return rc;
+}
+
+static int32_t
+ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			     struct bnxt_ulp_mapper_class_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_result_field_info *flds;
+	struct ulp_flow_db_res_params	fid_parms;
+	struct ulp_blob	data;
+	uint64_t idx;
+	uint16_t tmplen;
+	uint32_t i, num_flds;
+	int32_t rc = 0, trc = 0;
+	struct tf_alloc_tbl_entry_parms	aparms = { 0 };
+	struct tf_set_tbl_entry_parms	sparms = { 0 };
+	struct tf_free_tbl_entry_parms	free_parms = { 0 };
+
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+
+	if (!ulp_blob_init(&data, tbl->result_bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "Failed initial index table blob\n");
+		return -EINVAL;
+	}
+
+	flds = ulp_mapper_result_fields_get(tbl, &num_flds);
+	if (!flds || !num_flds) {
+		BNXT_TF_DBG(ERR, "Template undefined for action\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_flds; i++) {
+		rc = ulp_mapper_result_field_process(parms,
+						     &flds[i],
+						     &data);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "data field failed\n");
+			return rc;
+		}
+	}
+
+	aparms.dir		= tbl->direction;
+	aparms.type		= tbl->table_type;
+	aparms.search_enable	= tbl->srch_b4_alloc;
+	aparms.result		= ulp_blob_data_get(&data, &tmplen);
+	aparms.result_sz_in_bytes = ULP_SZ_BITS2BYTES(tbl->result_bit_size);
+
+	/* All failures after the alloc succeeds require a free */
+	rc = tf_alloc_tbl_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Alloc table[%d][%s] failed rc=%d\n",
+			    tbl->table_type,
+			    (tbl->direction == TF_DIR_RX) ? "RX" : "TX",
+			    rc);
+		return rc;
+	}
+
+	/* Always storing values in Regfile in BE */
+	idx = tfp_cpu_to_be_64(aparms.idx);
+	rc = ulp_regfile_write(parms->regfile, tbl->regfile_wr_idx, idx);
+	if (!rc) {
+		BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
+			    tbl->regfile_wr_idx);
+		rc = -EINVAL;
+		goto error;
+	}
+
+	if (!tbl->srch_b4_alloc) {
+		sparms.dir		= tbl->direction;
+		sparms.type		= tbl->table_type;
+		sparms.data		= ulp_blob_data_get(&data, &tmplen);
+		sparms.data_sz_in_bytes =
+			ULP_SZ_BITS2BYTES(tbl->result_bit_size);
+		sparms.idx		= aparms.idx;
+
+		rc = tf_set_tbl_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Set table[%d][%s][%d] failed rc=%d\n",
+				    tbl->table_type,
+				    (tbl->direction == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx,
+				    rc);
+
+			goto error;
+		}
+	}
+
+	/* Link the resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction	= tbl->direction;
+	fid_parms.resource_func	= tbl->resource_func;
+	fid_parms.resource_type	= tbl->table_type;
+	fid_parms.resource_hndl	= aparms.idx;
+	fid_parms.critical_resource	= 0;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		goto error;
+	}
+
+	return rc;
+error:
+	/*
+	 * Free the allocated resource since we failed to either
+	 * write to the entry or link the flow
+	 */
+	free_parms.dir	= tbl->direction;
+	free_parms.type	= tbl->table_type;
+	free_parms.idx	= aparms.idx;
+
+	trc = tf_free_tbl_entry(tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free tbl entry on failure\n");
+
+	return rc;
+}
+
 /*
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
@@ -362,3 +1086,48 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	return rc;
 }
+
+/* Create the classifier table entries for a flow. */
+int32_t
+ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
+{
+	uint32_t	i;
+	int32_t		rc = 0;
+
+	if (!parms)
+		return -EINVAL;
+
+	if (!parms->ctbls || !parms->num_ctbls) {
+		BNXT_TF_DBG(ERR, "No class tables for template[%d][%d].\n",
+			    parms->dev_id, parms->class_tid);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < parms->num_ctbls; i++) {
+		struct bnxt_ulp_mapper_class_tbl_info *tbl = &parms->ctbls[i];
+
+		switch (tbl->resource_func) {
+		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
+			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
+			break;
+		case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+			rc = ulp_mapper_em_tbl_process(parms, tbl);
+			break;
+		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+			rc = ulp_mapper_index_tbl_process(parms, tbl);
+			break;
+		default:
+			BNXT_TF_DBG(ERR, "Unexpected class resource %d\n",
+				    tbl->resource_func);
+			return -EINVAL;
+		}
+
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Resource type %d failed\n",
+				    tbl->resource_func);
+			return rc;
+		}
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 9e4307e..837064e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -6,14 +6,71 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_log.h>
+#include "bnxt.h"
 #include "bnxt_ulp.h"
 #include "tf_ext_flow_handle.h"
 #include "ulp_mark_mgr.h"
 #include "bnxt_tf_common.h"
-#include "../bnxt.h"
 #include "ulp_template_db.h"
 #include "ulp_template_struct.h"
 
+static inline uint32_t
+ulp_mark_db_idx_get(bool is_gfid, uint32_t fid, struct bnxt_ulp_mark_tbl *mtbl)
+{
+	uint32_t idx = 0, hashtype = 0;
+
+	if (is_gfid) {
+		TF_GET_HASH_TYPE_FROM_GFID(fid, hashtype);
+		TF_GET_HASH_INDEX_FROM_GFID(fid, idx);
+
+		/* Need to truncate anything beyond supported flows */
+		idx &= mtbl->gfid_mask;
+
+		if (hashtype)
+			idx |= mtbl->gfid_type_bit;
+	} else {
+		idx = fid;
+	}
+
+	return idx;
+}
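+
+/*
+ * Illustrative example (mask/type-bit values assumed, not from this
+ * patch): with gfid_mask = 0x3ff and gfid_type_bit = 0x400, a GFID with
+ * hash index 0x1234 and hash type 1 maps to mark-table index
+ * (0x1234 & 0x3ff) | 0x400 = 0x634.
+ */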
+
+static int32_t
+ulp_mark_db_mark_set(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t mark)
+{
+	struct bnxt_ulp_mark_tbl	*mtbl;
+	uint32_t	idx = 0;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+	if (!mtbl) {
+		BNXT_TF_DBG(ERR, "Unable to get Mark DB\n");
+		return -EINVAL;
+	}
+
+	idx = ulp_mark_db_idx_get(is_gfid, fid, mtbl);
+
+	if (is_gfid) {
+		BNXT_TF_DBG(ERR, "Set GFID[0x%0x] = 0x%0x\n", idx, mark);
+
+		mtbl->gfid_tbl[idx].mark_id = mark;
+		mtbl->gfid_tbl[idx].valid = true;
+	} else {
+		/* For the LFID, the FID is used as the index */
+		mtbl->lfid_tbl[fid].mark_id = mark;
+		mtbl->lfid_tbl[fid].valid = true;
+	}
+
+	return 0;
+}
+
 /*
  * Allocate and Initialize all Mark Manager resources for this ulp context.
  *
@@ -117,3 +174,24 @@ ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
 
 	return 0;
 }
+
+/*
+ * Adds a Mark to the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * fid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] The mark to be associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t mark)
+{
+	return ulp_mark_db_mark_set(ctxt, is_gfid, fid, mark);
+}
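+
+/*
+ * Illustrative usage (mirrors the EM path in ulp_mapper.c): after a
+ * successful tf_insert_em_entry(), the GFID and table flag are extracted
+ * from the returned flow_id and handed to the mark manager:
+ *
+ *	TF_GET_GFID_FROM_FLOW_ID(iparms.flow_id, gfid);
+ *	TF_GET_FLAG_FROM_FLOW_ID(iparms.flow_id, flag);
+ *	rc = ulp_mark_db_mark_add(ulp_ctx, flag == TF_GFID_TABLE_EXTERNAL,
+ *				  gfid, mark);
+ */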
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index 5948683..18abea4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -54,4 +54,22 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
 int32_t
 ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Adds a Mark to the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * fid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] The mark to be associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t mark);
+
 #endif /* _ULP_MARK_MGR_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 75bf967..ba06493 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -108,6 +108,902 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 	}
 };
 
+struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
+	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 0
+	}
+};
+
+struct bnxt_ulp_mapper_class_tbl_info ulp_class_tbl_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.table_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 0,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 0,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.ident_start_idx = 0,
+	.ident_nums = 1,
+	.mark_enable = BNXT_ULP_MARK_ENABLE_NO,
+	.critical_resource = 0,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.table_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 13,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 13,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.ident_start_idx = 1,
+	.ident_nums = 1,
+	.mark_enable = BNXT_ULP_MARK_ENABLE_NO,
+	.critical_resource = 0,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.table_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 55,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 197,
+	.key_num_fields = 11,
+	.result_start_idx = 21,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.ident_start_idx = 2,
+	.ident_nums = 0,
+	.mark_enable = BNXT_ULP_MARK_ENABLE_YES,
+	.critical_resource = 1,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	}
+};
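+
+/*
+ * Note: each class table slices the shared field arrays via
+ * [start_idx, start_idx + num_fields).  For example, the profile TCAM
+ * entry above consumes ulp_class_key_field_list[13..54] for its 42 key
+ * fields, as returned by ulp_mapper_key_fields_get().
+ */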
+
+struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_ETH_DMAC >> 8) & 0xff,
+		BNXT_ULP_HF0_O_ETH_DMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {(BNXT_ULP_HF0_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF0_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF0_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_L3_HDR_TYPE_IPV4,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_L2_HDR_TYPE_DIX,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x40, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_PKT_TYPE_L2,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_ADD_PAD,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF0_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF0_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF0_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF0_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_ETH_SMAC >> 8) & 0xff,
+		BNXT_ULP_HF0_O_ETH_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_REGFILE,
+	.spec_operand = {(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
+struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_REGFILE,
+	.result_operand = {(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x40, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {(0x00fd >> 8) & 0xff,
+		0x00fd & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 5,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x15, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 33,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_REGFILE,
+	.result_operand = {(BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 5,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x03, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 9,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {(0x00c5 >> 8) & 0xff,
+		0x00c5 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x03, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
+struct bnxt_ulp_mapper_ident_info ulp_ident_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+	.ident_type = TF_IDENT_TYPE_L2_CTXT,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0,
+	.ident_bit_size = 10,
+	.ident_bit_pos = 54
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+	.ident_type = TF_IDENT_TYPE_EM_PROF,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0,
+	.ident_bit_size = 8,
+	.ident_bit_pos = 2
+	}
+};
+
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
 	.field_bit_size = 14,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index e52cc3f..733836a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -13,6 +13,37 @@
 
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 
+enum bnxt_ulp_action_bit {
+	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
+	BNXT_ULP_ACTION_BIT_DROP             = 0x0000000000000002,
+	BNXT_ULP_ACTION_BIT_COUNT            = 0x0000000000000004,
+	BNXT_ULP_ACTION_BIT_RSS              = 0x0000000000000008,
+	BNXT_ULP_ACTION_BIT_METER            = 0x0000000000000010,
+	BNXT_ULP_ACTION_BIT_VNIC             = 0x0000000000000020,
+	BNXT_ULP_ACTION_BIT_VPORT            = 0x0000000000000040,
+	BNXT_ULP_ACTION_BIT_VXLAN_DECAP      = 0x0000000000000080,
+	BNXT_ULP_ACTION_BIT_NVGRE_DECAP      = 0x0000000000000100,
+	BNXT_ULP_ACTION_BIT_OF_POP_MPLS      = 0x0000000000000200,
+	BNXT_ULP_ACTION_BIT_OF_PUSH_MPLS     = 0x0000000000000400,
+	BNXT_ULP_ACTION_BIT_MAC_SWAP         = 0x0000000000000800,
+	BNXT_ULP_ACTION_BIT_SET_MAC_SRC      = 0x0000000000001000,
+	BNXT_ULP_ACTION_BIT_SET_MAC_DST      = 0x0000000000002000,
+	BNXT_ULP_ACTION_BIT_OF_POP_VLAN      = 0x0000000000004000,
+	BNXT_ULP_ACTION_BIT_OF_PUSH_VLAN     = 0x0000000000008000,
+	BNXT_ULP_ACTION_BIT_OF_SET_VLAN_PCP  = 0x0000000000010000,
+	BNXT_ULP_ACTION_BIT_OF_SET_VLAN_VID  = 0x0000000000020000,
+	BNXT_ULP_ACTION_BIT_SET_IPV4_SRC     = 0x0000000000040000,
+	BNXT_ULP_ACTION_BIT_SET_IPV4_DST     = 0x0000000000080000,
+	BNXT_ULP_ACTION_BIT_SET_IPV6_SRC     = 0x0000000000100000,
+	BNXT_ULP_ACTION_BIT_SET_IPV6_DST     = 0x0000000000200000,
+	BNXT_ULP_ACTION_BIT_DEC_TTL          = 0x0000000000400000,
+	BNXT_ULP_ACTION_BIT_SET_TP_SRC       = 0x0000000000800000,
+	BNXT_ULP_ACTION_BIT_SET_TP_DST       = 0x0000000001000000,
+	BNXT_ULP_ACTION_BIT_VXLAN_ENCAP      = 0x0000000002000000,
+	BNXT_ULP_ACTION_BIT_NVGRE_ENCAP      = 0x0000000004000000,
+	BNXT_ULP_ACTION_BIT_LAST             = 0x0000000008000000
+};
+
 enum bnxt_ulp_byte_order {
 	BNXT_ULP_BYTE_ORDER_BE,
 	BNXT_ULP_BYTE_ORDER_LE,
@@ -35,8 +66,48 @@ enum bnxt_ulp_fmf_mask {
 	BNXT_ULP_FMF_MASK_LAST
 };
 
+enum bnxt_ulp_mark_enable {
+	BNXT_ULP_MARK_ENABLE_NO = 0,
+	BNXT_ULP_MARK_ENABLE_YES = 1,
+	BNXT_ULP_MARK_ENABLE_LAST = 2
+};
+
+enum bnxt_ulp_mask_opc {
+	BNXT_ULP_MASK_OPC_SET_TO_CONSTANT = 0,
+	BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD = 1,
+	BNXT_ULP_MASK_OPC_SET_TO_REGFILE = 2,
+	BNXT_ULP_MASK_OPC_ADD_PAD = 3,
+	BNXT_ULP_MASK_OPC_LAST = 4
+};
+
+enum bnxt_ulp_priority {
+	BNXT_ULP_PRIORITY_LEVEL_0 = 0,
+	BNXT_ULP_PRIORITY_LEVEL_1 = 1,
+	BNXT_ULP_PRIORITY_LEVEL_2 = 2,
+	BNXT_ULP_PRIORITY_LEVEL_3 = 3,
+	BNXT_ULP_PRIORITY_LEVEL_4 = 4,
+	BNXT_ULP_PRIORITY_LEVEL_5 = 5,
+	BNXT_ULP_PRIORITY_LEVEL_6 = 6,
+	BNXT_ULP_PRIORITY_LEVEL_7 = 7,
+	BNXT_ULP_PRIORITY_NOT_USED = 8,
+	BNXT_ULP_PRIORITY_LAST = 9
+};
+
 enum bnxt_ulp_regfile_index {
-	BNXT_ULP_REGFILE_INDEX_LAST
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 0,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 1,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 2,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 3,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 4,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 5,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 6,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 7,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN = 8,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 9,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 10,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 11,
+	BNXT_ULP_REGFILE_INDEX_NOT_USED = 12,
+	BNXT_ULP_REGFILE_INDEX_LAST = 13
 };
 
 enum bnxt_ulp_resource_func {
@@ -56,9 +127,78 @@ enum bnxt_ulp_result_opc {
 	BNXT_ULP_RESULT_OPC_LAST = 4
 };
 
+enum bnxt_ulp_spec_opc {
+	BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT = 0,
+	BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD = 1,
+	BNXT_ULP_SPEC_OPC_SET_TO_REGFILE = 2,
+	BNXT_ULP_SPEC_OPC_ADD_PAD = 3,
+	BNXT_ULP_SPEC_OPC_LAST = 4
+};
+
 enum bnxt_ulp_sym {
+	BNXT_ULP_SYM_BIG_ENDIAN = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
+	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
+	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
+	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
+	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
+	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
+	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
+	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
+	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
 	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
+	BNXT_ULP_SYM_NO = 0,
+	BNXT_ULP_SYM_PKT_TYPE_L2 = 0,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_TL4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_TL4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_GENEVE = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_GRE = 3,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_PPPOE = 6,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR1 = 8,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR2 = 9,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
 	BNXT_ULP_SYM_YES = 1
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_field_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
new file mode 100644
index 0000000..1f58ace
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* date: Mon Mar  9 02:37:53 2020
+ * version: 0.0
+ */
+
+#ifndef _ULP_HDR_FIELD_ENUMS_H_
+#define _ULP_HDR_FIELD_ENUMS_H_
+
+/* class_template_id = 0: ingress flow */
+enum bnxt_ulp_hf0 {
+	BNXT_ULP_HF0_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HF0_O_VTAG_NUM = 1,
+	BNXT_ULP_HF0_I_VTAG_NUM = 2,
+	BNXT_ULP_HF0_SVIF_INDEX = 3,
+	BNXT_ULP_HF0_O_ETH_DMAC = 4,
+	BNXT_ULP_HF0_O_ETH_SMAC = 5,
+	BNXT_ULP_HF0_O_ETH_TYPE = 6,
+	BNXT_ULP_HF0_OO_VLAN_CFI_PRI = 7,
+	BNXT_ULP_HF0_OO_VLAN_VID = 8,
+	BNXT_ULP_HF0_OO_VLAN_TYPE = 9,
+	BNXT_ULP_HF0_OI_VLAN_CFI_PRI = 10,
+	BNXT_ULP_HF0_OI_VLAN_VID = 11,
+	BNXT_ULP_HF0_OI_VLAN_TYPE = 12,
+	BNXT_ULP_HF0_O_IPV4_VER = 13,
+	BNXT_ULP_HF0_O_IPV4_TOS = 14,
+	BNXT_ULP_HF0_O_IPV4_LEN = 15,
+	BNXT_ULP_HF0_O_IPV4_FRAG_ID = 16,
+	BNXT_ULP_HF0_O_IPV4_FRAG_OFF = 17,
+	BNXT_ULP_HF0_O_IPV4_TTL = 18,
+	BNXT_ULP_HF0_O_IPV4_NEXT_PID = 19,
+	BNXT_ULP_HF0_O_IPV4_CSUM = 20,
+	BNXT_ULP_HF0_O_IPV4_SRC_ADDR = 21,
+	BNXT_ULP_HF0_O_IPV4_DST_ADDR = 22,
+	BNXT_ULP_HF0_O_UDP_SRC_PORT = 23,
+	BNXT_ULP_HF0_O_UDP_DST_PORT = 24,
+	BNXT_ULP_HF0_O_UDP_LENGTH = 25,
+	BNXT_ULP_HF0_O_UDP_CSUM = 26,
+	BNXT_ULP_HF0_T_VXLAN_FLAGS = 27,
+	BNXT_ULP_HF0_T_VXLAN_RSVD0 = 28,
+	BNXT_ULP_HF0_T_VXLAN_VNI = 29,
+	BNXT_ULP_HF0_T_VXLAN_RSVD1 = 30,
+	BNXT_ULP_HF0_I_ETH_DMAC = 31,
+	BNXT_ULP_HF0_I_ETH_SMAC = 32,
+	BNXT_ULP_HF0_I_ETH_TYPE = 33,
+	BNXT_ULP_HF0_IO_VLAN_CFI_PRI = 34,
+	BNXT_ULP_HF0_IO_VLAN_VID = 35,
+	BNXT_ULP_HF0_IO_VLAN_TYPE = 36,
+	BNXT_ULP_HF0_II_VLAN_CFI_PRI = 37,
+	BNXT_ULP_HF0_II_VLAN_VID = 38,
+	BNXT_ULP_HF0_II_VLAN_TYPE = 39,
+	BNXT_ULP_HF0_I_IPV4_VER = 40,
+	BNXT_ULP_HF0_I_IPV4_TOS = 41,
+	BNXT_ULP_HF0_I_IPV4_LEN = 42,
+	BNXT_ULP_HF0_I_IPV4_FRAG_ID = 43,
+	BNXT_ULP_HF0_I_IPV4_FRAG_OFF = 44,
+	BNXT_ULP_HF0_I_IPV4_TTL = 45,
+	BNXT_ULP_HF0_I_IPV4_NEXT_PID = 46,
+	BNXT_ULP_HF0_I_IPV4_CSUM = 47,
+	BNXT_ULP_HF0_I_IPV4_SRC_ADDR = 48,
+	BNXT_ULP_HF0_I_IPV4_DST_ADDR = 49,
+	BNXT_ULP_HF0_I_UDP_SRC_PORT = 50,
+	BNXT_ULP_HF0_I_UDP_DST_PORT = 51,
+	BNXT_ULP_HF0_I_UDP_LENGTH = 52,
+	BNXT_ULP_HF0_I_UDP_CSUM = 53
+};
+
+/* class_template_id = 1: egress flow */
+enum bnxt_ulp_hf1 {
+	BNXT_ULP_HF1_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HF1_O_VTAG_NUM = 1,
+	BNXT_ULP_HF1_I_VTAG_NUM = 2,
+	BNXT_ULP_HF1_SVIF_INDEX = 3,
+	BNXT_ULP_HF1_O_ETH_DMAC = 4,
+	BNXT_ULP_HF1_O_ETH_SMAC = 5,
+	BNXT_ULP_HF1_O_ETH_TYPE = 6,
+	BNXT_ULP_HF1_OO_VLAN_CFI_PRI = 7,
+	BNXT_ULP_HF1_OO_VLAN_VID = 8,
+	BNXT_ULP_HF1_OO_VLAN_TYPE = 9,
+	BNXT_ULP_HF1_OI_VLAN_CFI_PRI = 10,
+	BNXT_ULP_HF1_OI_VLAN_VID = 11,
+	BNXT_ULP_HF1_OI_VLAN_TYPE = 12,
+	BNXT_ULP_HF1_O_IPV4_VER = 13,
+	BNXT_ULP_HF1_O_IPV4_TOS = 14,
+	BNXT_ULP_HF1_O_IPV4_LEN = 15,
+	BNXT_ULP_HF1_O_IPV4_FRAG_ID = 16,
+	BNXT_ULP_HF1_O_IPV4_FRAG_OFF = 17,
+	BNXT_ULP_HF1_O_IPV4_TTL = 18,
+	BNXT_ULP_HF1_O_IPV4_NEXT_PID = 19,
+	BNXT_ULP_HF1_O_IPV4_CSUM = 20,
+	BNXT_ULP_HF1_O_IPV4_SRC_ADDR = 21,
+	BNXT_ULP_HF1_O_IPV4_DST_ADDR = 22,
+	BNXT_ULP_HF1_O_UDP_SRC_PORT = 23,
+	BNXT_ULP_HF1_O_UDP_DST_PORT = 24,
+	BNXT_ULP_HF1_O_UDP_LENGTH = 25,
+	BNXT_ULP_HF1_O_UDP_CSUM = 26
+};
+
+/* class_template_id = 2: ingress flow */
+enum bnxt_ulp_hf2 {
+	BNXT_ULP_HF2_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HF2_O_VTAG_NUM = 1,
+	BNXT_ULP_HF2_I_VTAG_NUM = 2,
+	BNXT_ULP_HF2_SVIF_INDEX = 3,
+	BNXT_ULP_HF2_O_ETH_DMAC = 4,
+	BNXT_ULP_HF2_O_ETH_SMAC = 5,
+	BNXT_ULP_HF2_O_ETH_TYPE = 6,
+	BNXT_ULP_HF2_OO_VLAN_CFI_PRI = 7,
+	BNXT_ULP_HF2_OO_VLAN_VID = 8,
+	BNXT_ULP_HF2_OO_VLAN_TYPE = 9,
+	BNXT_ULP_HF2_OI_VLAN_CFI_PRI = 10,
+	BNXT_ULP_HF2_OI_VLAN_VID = 11,
+	BNXT_ULP_HF2_OI_VLAN_TYPE = 12,
+	BNXT_ULP_HF2_O_IPV4_VER = 13,
+	BNXT_ULP_HF2_O_IPV4_TOS = 14,
+	BNXT_ULP_HF2_O_IPV4_LEN = 15,
+	BNXT_ULP_HF2_O_IPV4_FRAG_ID = 16,
+	BNXT_ULP_HF2_O_IPV4_FRAG_OFF = 17,
+	BNXT_ULP_HF2_O_IPV4_TTL = 18,
+	BNXT_ULP_HF2_O_IPV4_NEXT_PID = 19,
+	BNXT_ULP_HF2_O_IPV4_CSUM = 20,
+	BNXT_ULP_HF2_O_IPV4_SRC_ADDR = 21,
+	BNXT_ULP_HF2_O_IPV4_DST_ADDR = 22,
+	BNXT_ULP_HF2_O_UDP_SRC_PORT = 23,
+	BNXT_ULP_HF2_O_UDP_DST_PORT = 24,
+	BNXT_ULP_HF2_O_UDP_LENGTH = 25,
+	BNXT_ULP_HF2_O_UDP_CSUM = 26
+};
+
+#endif /* _ULP_HDR_FIELD_ENUMS_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 2b0a3d7..e28d049 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,9 +17,21 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
+/* Structure to store the protocol fields */
+#define RTE_PARSER_FLOW_HDR_FIELD_SIZE		16
+struct ulp_rte_hdr_field {
+	uint8_t		spec[RTE_PARSER_FLOW_HDR_FIELD_SIZE];
+	uint8_t		mask[RTE_PARSER_FLOW_HDR_FIELD_SIZE];
+	uint32_t	size;
+};
+
+struct ulp_rte_act_bitmap {
+	uint64_t	bits;
+};
+
 /*
- * structure to hold the action property details
- * It is a array of 128 bytes
+ * Structure to hold the action property details.
+ * It is an array of 128 bytes.
  */
 struct ulp_rte_act_prop {
 	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
@@ -39,6 +51,35 @@ struct bnxt_ulp_device_params {
 	uint32_t			num_resources_per_flow;
 };
 
+struct bnxt_ulp_mapper_class_tbl_info {
+	enum bnxt_ulp_resource_func	resource_func;
+	uint32_t	table_type;
+	uint8_t		direction;
+	uint8_t		mem;
+	uint32_t	priority;
+	uint8_t		srch_b4_alloc;
+	uint32_t	critical_resource;
+
+	/* Information for accessing the ulp_class_key_field_list */
+	uint32_t	key_start_idx;
+	uint16_t	key_bit_size;
+	uint16_t	key_num_fields;
+	/* Size of the blob that holds the key */
+	uint16_t	blob_key_bit_size;
+
+	/* Information for accessing the ulp_class_result_field_list */
+	uint32_t	result_start_idx;
+	uint16_t	result_bit_size;
+	uint16_t	result_num_fields;
+
+	/* Information for accessing the ulp_ident_list */
+	uint32_t	ident_start_idx;
+	uint16_t	ident_nums;
+
+	uint8_t		mark_enable;
+	enum bnxt_ulp_regfile_index	regfile_wr_idx;
+};
+
 struct bnxt_ulp_mapper_act_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	enum tf_tbl_type table_type;
@@ -52,6 +93,15 @@ struct bnxt_ulp_mapper_act_tbl_info {
 	enum bnxt_ulp_regfile_index	regfile_wr_idx;
 };
 
+struct bnxt_ulp_mapper_class_key_field_info {
+	uint8_t			name[64];
+	enum bnxt_ulp_mask_opc	mask_opcode;
+	enum bnxt_ulp_spec_opc	spec_opcode;
+	uint16_t		field_bit_size;
+	uint8_t			mask_operand[16];
+	uint8_t			spec_operand[16];
+};
+
 struct bnxt_ulp_mapper_result_field_info {
 	uint8_t				name[64];
 	enum bnxt_ulp_result_opc	result_opcode;
@@ -59,14 +109,36 @@ struct bnxt_ulp_mapper_result_field_info {
 	uint8_t				result_operand[16];
 };
 
+struct bnxt_ulp_mapper_ident_info {
+	uint8_t		name[64];
+	uint32_t	resource_func;
+
+	uint16_t	ident_type;
+	uint16_t	ident_bit_size;
+	uint16_t	ident_bit_pos;
+	enum bnxt_ulp_regfile_index	regfile_wr_idx;
+};
+
+/*
+ * Flow Mapper Static Data Externs:
+ * Access to the below static data should be done through access functions
+ * rather than directly throughout the code.
+ */
+
 /*
- * The ulp_device_params is indexed by the dev_id
- * This table maintains the device specific parameters
+ * The ulp_device_params is indexed by the dev_id.
+ * This table maintains the device specific parameters.
  */
 extern struct bnxt_ulp_device_params ulp_device_params[];
 
 /*
  * The ulp_data_field_list provides the instructions for creating an action
+ * or classification result record, such as tcam/em results.
+ */
+extern struct bnxt_ulp_mapper_result_field_info	ulp_class_result_field_list[];
+
+/*
+ * The ulp_act_result_field_list provides the instructions for creating an action
  * record.  It uses the same structure as the result list, but is only used for
  * actions.
  */
@@ -75,6 +147,19 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[];
 
 /*
  * The ulp_act_prop_map_table provides the mapping to index and size of action
+ * key fields for the tcam and em tables.
+ */
+extern
+struct bnxt_ulp_mapper_class_key_field_info	ulp_class_key_field_list[];
+
+/*
+ * The ulp_ident_list provides the instructions for creating identifiers such
+ * as profile ids.
+ */
+extern struct bnxt_ulp_mapper_ident_info	ulp_ident_list[];
+
+/*
+ * The ulp_act_prop_map_table provides the mapping to index and size of action
  * properties.
  */
 extern uint32_t ulp_act_prop_map_table[];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 21/34] net/bnxt: add support to free key and action tables
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (19 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 20/34] net/bnxt: add support to process key tables Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 22/34] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
                     ` (14 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch does the following (a short usage sketch follows the list):
1. Gets all the flow resources from the flow id
2. Frees all the table resources
3. Frees the flow in the flow table
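
A minimal sketch of how a caller (for example, the rte_flow destroy
driver hook) is expected to drive this teardown path; "ulp_ctx" and
"flow_id" stand in for values the caller already holds:

	int32_t rc;

	/* Frees every table resource recorded against the flow, then
	 * releases the fid back to the regular flow table.
	 */
	rc = ulp_mapper_flow_destroy(ulp_ctx, flow_id);
	if (rc)
		BNXT_TF_DBG(ERR, "Failed to destroy flow 0x%x\n", flow_id);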

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c  | 199 ++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h  |  30 +++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c   | 193 ++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h   |  15 +++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  23 +++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |  18 +++
 6 files changed, 476 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 0e7b433..8449db3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -23,6 +23,32 @@
 #define ULP_FLOW_DB_RES_NXT_RESET(dst)	((dst) &= ~(ULP_FLOW_DB_RES_NXT_MASK))
 
 /*
+ * Helper function to set the bit in the active flow table
+ * No validation is done in this function.
+ *
+ * flow_tbl [in] Ptr to flow table
+ * idx [in] The index of the bit to be set or reset.
+ * flag [in] 1 to set and 0 to reset.
+ *
+ * returns none
+ */
+static void
+ulp_flow_db_active_flow_set(struct bnxt_ulp_flow_tbl	*flow_tbl,
+			    uint32_t			idx,
+			    uint32_t			flag)
+{
+	uint32_t		active_index;
+
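+	/* Locate the bitmap word that holds this flow's active bit */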
+	active_index = idx / ULP_INDEX_BITMAP_SIZE;
+	if (flag)
+		ULP_INDEX_BITMAP_SET(flow_tbl->active_flow_tbl[active_index],
+				     idx);
+	else
+		ULP_INDEX_BITMAP_RESET(flow_tbl->active_flow_tbl[active_index],
+				       idx);
+}
+
+/*
  * Helper function to allocate the flow table and initialize
 *  is set. No validation being done in this function.
  *
@@ -71,6 +97,35 @@ ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info   *resource_info,
 }
 
 /*
+ * Helper function to copy the resource info to the resource params.
+ *  No validation being done in this function.
+ *
+ * resource_info [in] Ptr to resource information
+ * params [out] The output params to the caller
+ *
+ * returns none
+ */
+static void
+ulp_flow_db_res_info_to_params(struct ulp_fdb_resource_info   *resource_info,
+			       struct ulp_flow_db_res_params  *params)
+{
+	memset(params, 0, sizeof(struct ulp_flow_db_res_params));
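+	/* direction and resource_func are bit-packed into nxt_resource_idx */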
+	params->direction = ((resource_info->nxt_resource_idx &
+				 ULP_FLOW_DB_RES_DIR_MASK) >>
+				 ULP_FLOW_DB_RES_DIR_BIT);
+	params->resource_func = ((resource_info->nxt_resource_idx &
+				 ULP_FLOW_DB_RES_FUNC_MASK) >>
+				 ULP_FLOW_DB_RES_FUNC_BITS);
+
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+		params->resource_hndl = resource_info->resource_hndl;
+		params->resource_type = resource_info->resource_type;
+	} else {
+		params->resource_hndl = resource_info->resource_em_handle;
+	}
+}
+
+/*
  * Helper function to allocate the flow table and initialize
  * the stack for allocation operations.
  *
@@ -122,7 +177,7 @@ ulp_flow_db_alloc_resource(struct bnxt_ulp_flow_db *flow_db,
 }
 
 /*
- * Helper function to de allocate the flow table.
+ * Helper function to deallocate the flow table.
  *
  * flow_db [in] Ptr to flow database structure
  * tbl_idx [in] The index to table creation.
@@ -321,3 +376,145 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 	/* all good, return success */
 	return 0;
 }
+
+/*
+ * Free the flow database entry.
+ * The params->critical_resource has to be set to 1 to free the first resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in/out] The contents to be copied into params.
+ * Only the critical_resource needs to be set by the caller.
+ *
+ * Returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+	struct ulp_fdb_resource_info	*nxt_resource, *fid_resource;
+	uint32_t			nxt_idx = 0;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (tbl_idx < 0 || tbl_idx >= BNXT_ULP_FLOW_TABLE_MAX) {
+		BNXT_TF_DBG(ERR, "Invalid table index\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* check for max flows */
+	if (fid >= flow_tbl->num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return -EINVAL;
+	}
+
+	/* check if the flow is active or not */
+	if (!ulp_flow_db_active_flow_is_set(flow_tbl, fid)) {
+		BNXT_TF_DBG(ERR, "flow does not exist\n");
+		return -EINVAL;
+	}
+
+	fid_resource = &flow_tbl->flow_resources[fid];
+	if (!params->critical_resource) {
+		/* Not the critical resource so free the resource */
+		ULP_FLOW_DB_RES_NXT_SET(nxt_idx,
+					fid_resource->nxt_resource_idx);
+		if (!nxt_idx) {
+			/* reached end of resources */
+			return -ENOENT;
+		}
+		nxt_resource = &flow_tbl->flow_resources[nxt_idx];
+
+		/* connect the fid resource to the next resource */
+		ULP_FLOW_DB_RES_NXT_RESET(fid_resource->nxt_resource_idx);
+		ULP_FLOW_DB_RES_NXT_SET(fid_resource->nxt_resource_idx,
+					nxt_resource->nxt_resource_idx);
+
+		/* update the contents to be given to caller */
+		ulp_flow_db_res_info_to_params(nxt_resource, params);
+
+		/* Delete the nxt_resource */
+		memset(nxt_resource, 0, sizeof(struct ulp_fdb_resource_info));
+
+		/* add it to the free list */
+		flow_tbl->tail_index++;
+		if (flow_tbl->tail_index >= flow_tbl->num_resources) {
+			BNXT_TF_DBG(ERR, "FlowDB:Tail reached max\n");
+			return -ENOENT;
+		}
+		flow_tbl->flow_tbl_stack[flow_tbl->tail_index] = nxt_idx;
+
+	} else {
+		/* Critical resource. copy the contents and exit */
+		ulp_flow_db_res_info_to_params(fid_resource, params);
+		ULP_FLOW_DB_RES_NXT_SET(nxt_idx,
+					fid_resource->nxt_resource_idx);
+		memset(fid_resource, 0, sizeof(struct ulp_fdb_resource_info));
+		ULP_FLOW_DB_RES_NXT_SET(fid_resource->nxt_resource_idx,
+					nxt_idx);
+	}
+
+	/* all good, return success */
+	return 0;
+}
+
+/*
+ * Free the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
+			     enum bnxt_ulp_flow_db_tables	tbl_idx,
+			     uint32_t				fid)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (tbl_idx < 0 || tbl_idx >= BNXT_ULP_FLOW_TABLE_MAX) {
+		BNXT_TF_DBG(ERR, "Invalid table index\n");
+		return -EINVAL;
+	}
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* check for limits of fid */
+	if (fid >= flow_tbl->num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return -EINVAL;
+	}
+
+	/* check if the flow is active or not */
+	if (!ulp_flow_db_active_flow_is_set(flow_tbl, fid)) {
+		BNXT_TF_DBG(ERR, "flow does not exist\n");
+		return -EINVAL;
+	}
+	flow_tbl->head_index--;
+	if (!flow_tbl->head_index) {
+		BNXT_TF_DBG(ERR, "FlowDB: Head Ptr is zero\n");
+		return -ENOENT;
+	}
+	flow_tbl->flow_tbl_stack[flow_tbl->head_index] = fid;
+	ulp_flow_db_active_flow_set(flow_tbl, fid, 0);
+
+	/* all good, return success */
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index f6055a5..20109b9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -99,4 +99,34 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 				 uint32_t			fid,
 				 struct ulp_flow_db_res_params	*params);
 
+/*
+ * Free the flow database entry.
+ * The params->critical_resource has to be set to 1 to free the first resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in/out] The contents to be copied into params.
+ * Only the critical_resource needs to be set by the caller.
+ *
+ * Returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params);
+
+/*
+ * Free the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
+			     enum bnxt_ulp_flow_db_tables	tbl_idx,
+			     uint32_t				fid);
+
 #endif /* _ULP_FLOW_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index a041394..1b22720 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -143,6 +143,87 @@ ulp_mapper_ident_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
 	return &ulp_ident_list[idx];
 }
 
+static inline int32_t
+ulp_mapper_tcam_entry_free(struct bnxt_ulp_context *ulp  __rte_unused,
+			   struct tf *tfp,
+			   struct ulp_flow_db_res_params *res)
+{
+	struct tf_free_tcam_entry_parms fparms = {
+		.dir		= res->direction,
+		.tcam_tbl_type	= res->resource_type,
+		.idx		= (uint16_t)res->resource_hndl
+	};
+
+	return tf_free_tcam_entry(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_index_entry_free(struct bnxt_ulp_context *ulp  __rte_unused,
+			    struct tf *tfp,
+			    struct ulp_flow_db_res_params *res)
+{
+	struct tf_free_tbl_entry_parms fparms = {
+		.dir	= res->direction,
+		.type	= res->resource_type,
+		.idx	= (uint32_t)res->resource_hndl
+	};
+
+	return tf_free_tbl_entry(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_eem_entry_free(struct bnxt_ulp_context *ulp,
+			  struct tf *tfp,
+			  struct ulp_flow_db_res_params *res)
+{
+	struct tf_delete_em_entry_parms fparms = { 0 };
+	int32_t rc;
+
+	fparms.dir		= res->direction;
+	fparms.mem		= TF_MEM_EXTERNAL;
+	fparms.flow_handle	= res->resource_hndl;
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp, &fparms.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_em_entry(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_ident_free(struct bnxt_ulp_context *ulp __rte_unused,
+		      struct tf *tfp,
+		      struct ulp_flow_db_res_params *res)
+{
+	struct tf_free_identifier_parms fparms = {
+		.dir		= res->direction,
+		.ident_type	= res->resource_type,
+		.id		= (uint16_t)res->resource_hndl
+	};
+
+	return tf_free_identifier(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_mark_free(struct bnxt_ulp_context *ulp,
+		     struct ulp_flow_db_res_params *res)
+{
+	uint32_t flag;
+	uint32_t fid;
+	uint32_t gfid;
+
+	fid	  = (uint32_t)res->resource_hndl;
+	TF_GET_FLAG_FROM_FLOW_ID(fid, flag);
+	TF_GET_GFID_FROM_FLOW_ID(fid, gfid);
+
+	return ulp_mark_db_mark_del(ulp,
+				    (flag == TF_GFID_TABLE_EXTERNAL),
+				    gfid,
+				    0);
+}
+
 static int32_t
 ulp_mapper_ident_process(struct bnxt_ulp_mapper_parms *parms,
 			 struct bnxt_ulp_mapper_class_tbl_info *tbl,
@@ -1131,3 +1212,115 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	return rc;
 }
+
+static int32_t
+ulp_mapper_resource_free(struct bnxt_ulp_context *ulp,
+			 struct ulp_flow_db_res_params *res)
+{
+	struct tf *tfp;
+	int32_t	rc = 0;
+
+	if (!res || !ulp) {
+		BNXT_TF_DBG(ERR, "Unable to free resource\n ");
+		return -EINVAL;
+	}
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Unable to free resource failed to get tfp\n");
+		return -EINVAL;
+	}
+
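+	/* Dispatch on how the resource was originally allocated */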
+	switch (res->resource_func) {
+	case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
+		rc = ulp_mapper_tcam_entry_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+		rc = ulp_mapper_eem_entry_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+		rc = ulp_mapper_index_entry_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_IDENTIFIER:
+		rc = ulp_mapper_ident_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_HW_FID:
+		rc = ulp_mapper_mark_free(ulp, res);
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+int32_t
+ulp_mapper_resources_free(struct bnxt_ulp_context	*ulp_ctx,
+			  uint32_t fid,
+			  enum bnxt_ulp_flow_db_tables	tbl_type)
+{
+	struct ulp_flow_db_res_params	res_parms = { 0 };
+	int32_t				rc, trc;
+
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Set the critical resource on the first resource del, then iterate
+	 * while status is good
+	 */
+	res_parms.critical_resource = 1;
+	rc = ulp_flow_db_resource_del(ulp_ctx, tbl_type, fid, &res_parms);
+
+	if (rc) {
+		/*
+		 * This is unexpected on the first call to resource del.
+		 * It likely means that the flow did not exist in the flow db.
+		 */
+		BNXT_TF_DBG(ERR, "Flow[%d][0x%08x] failed to free (rc=%d)\n",
+			    tbl_type, fid, rc);
+		return rc;
+	}
+
+	while (!rc) {
+		trc = ulp_mapper_resource_free(ulp_ctx, &res_parms);
+		if (trc)
+			/*
+			 * On fail, we still need to attempt to free the
+			 * remaining resources.  Don't return
+			 */
+			BNXT_TF_DBG(ERR,
+				    "Flow[%d][0x%x] Res[%d][0x%016" PRIx64
+				    "] failed rc=%d.\n",
+				    tbl_type, fid, res_parms.resource_func,
+				    res_parms.resource_hndl, trc);
+
+		/* All subsequent calls require critical_resource to be zero */
+		res_parms.critical_resource = 0;
+
+		rc = ulp_flow_db_resource_del(ulp_ctx,
+					      tbl_type,
+					      fid,
+					      &res_parms);
+	}
+
+	/* Free the Flow ID since we've removed all resources */
+	rc = ulp_flow_db_fid_free(ulp_ctx, tbl_type, fid);
+
+	return rc;
+}
+
+int32_t
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
+{
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
+		return -EINVAL;
+	}
+
+	return ulp_mapper_resources_free(ulp_ctx,
+					 fid,
+					 BNXT_ULP_REGULAR_FLOW_TABLE);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index adbcec2..8655728 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -15,6 +15,8 @@
 #include "bnxt_ulp.h"
 #include "ulp_utils.h"
 
+#define ULP_SZ_BITS2BYTES(x) (((x) + 7) / 8)
+
 /* Internal Structure for passing the arguments around */
 struct bnxt_ulp_mapper_parms {
 	uint32_t				dev_id;
@@ -36,4 +38,17 @@ struct bnxt_ulp_mapper_parms {
 	enum bnxt_ulp_flow_db_tables		tbl_idx;
 };
 
+/* Function that frees all resources associated with the flow. */
+int32_t
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
+
+/*
+ * Function that frees all resources and can be called on default or regular
+ * flows
+ */
+int32_t
+ulp_mapper_resources_free(struct bnxt_ulp_context	*ulp_ctx,
+			  uint32_t fid,
+			  enum bnxt_ulp_flow_db_tables	tbl_type);
+
 #endif /* _ULP_MAPPER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 837064e..566668e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -135,7 +135,7 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 		    mark_tbl->gfid_max,
 		    mark_tbl->gfid_mask);
 
-	/* Add the mart tbl to the ulp context. */
+	/* Add the mark tbl to the ulp context. */
 	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
 
 	return 0;
@@ -195,3 +195,24 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 {
 	return ulp_mark_db_mark_set(ctxt, is_gfid, gfid, mark);
 }
+
+/*
+ * Removes a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * gfid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] Unused; present only to mirror the mark add API
+ *
+ */
+int32_t
+ulp_mark_db_mark_del(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t gfid,
+		     uint32_t mark  __rte_unused)
+{
+	return ulp_mark_db_mark_set(ctxt, is_gfid, gfid, ULP_MARK_INVALID);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index 18abea4..f0d1515 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -72,4 +72,22 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 		     uint32_t gfid,
 		     uint32_t mark);
 
+/*
+ * Removes a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * gfid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] Unused; present only to mirror the mark add API
+ *
+ */
+int32_t
+ulp_mark_db_mark_del(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t gfid,
+		     uint32_t mark);
+
 #endif /* _ULP_MARK_MGR_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 22/34] net/bnxt: add support to alloc and program key and act tbls
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (20 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 21/34] net/bnxt: add support to free key and action tables Venkat Duvvuru
@ 2020-04-13 19:39   ` Venkat Duvvuru
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 23/34] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
                     ` (13 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:39 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch does the following:
1. Gets the action tables information from the action template id
2. Gets the class tables information from the class template id
3. Initializes the registry file
4. Allocates a flow id from the flow table
5. Processes the class & action tables
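
A minimal standalone sketch of this allocate-then-unwind ordering follows;
the stub names alloc_fid(), process_act_tbls(), process_class_tbls() and
destroy_flow() are illustrative stand-ins for the driver routines, not
actual APIs:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stubs standing in for the flow db and the table
 * processing routines.
 */
static int alloc_fid(uint32_t *fid) { *fid = 1; return 0; }
static int process_act_tbls(uint32_t fid) { (void)fid; return 0; }
static int process_class_tbls(uint32_t fid) { (void)fid; return 0; }
static void destroy_flow(uint32_t fid)
{
	printf("unwinding resources for fid %u\n", fid);
}

static int flow_create(uint32_t *flow_id)
{
	uint32_t fid;
	int rc = alloc_fid(&fid);

	if (rc)
		return rc;              /* nothing allocated yet */
	rc = process_act_tbls(fid);     /* actions before classifiers */
	if (!rc)
		rc = process_class_tbls(fid);
	if (rc) {
		destroy_flow(fid);      /* walk and free linked resources */
		return rc;
	}
	*flow_id = fid;
	return 0;
}

int main(void)
{
	uint32_t id = 0;

	return flow_create(&id);
}

Allocating the fid before any table processing is what lets the error
path free everything through a single flow-db walk.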

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |  37 +++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |  13 ++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 196 ++++++++++++++++++++++++--
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |  15 ++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     |  22 +++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |   7 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  31 +++-
 7 files changed, 310 insertions(+), 11 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 8449db3..76ec856 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -303,6 +303,43 @@ int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 }
 
 /*
+ * Allocate the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t ulp_flow_db_fid_alloc(struct bnxt_ulp_context		*ulp_ctxt,
+			      enum bnxt_ulp_flow_db_tables	tbl_idx,
+			      uint32_t				*fid)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	*fid = 0; /* Initialize fid to invalid value */
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+	/* check for max flows */
+	if (flow_tbl->num_flows <= flow_tbl->head_index) {
+		BNXT_TF_DBG(ERR, "Flow database has reached max flows\n");
+		return -ENOMEM;
+	}
+	*fid = flow_tbl->flow_tbl_stack[flow_tbl->head_index];
+	flow_tbl->head_index++;
+	ulp_flow_db_active_flow_set(flow_tbl, *fid, 1);
+
+	/* all good, return success */
+	return 0;
+}
+
+/*
  * Allocate the flow database entry.
  * The params->critical_resource has to be set to 0 to allocate a new resource.
  *
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index 20109b9..eb5effa 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -84,6 +84,19 @@ int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt);
 int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
 
 /*
+ * Allocate the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t ulp_flow_db_fid_alloc(struct bnxt_ulp_context		*ulp_ctxt,
+			      enum bnxt_ulp_flow_db_tables	tbl_idx,
+			      uint32_t				*fid);
+
+/*
  * Allocate the flow database entry.
  * The params->critical_resource has to be set to 0 to allocate a new resource.
  *
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 1b22720..f697f2f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -16,12 +16,6 @@
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
 
-int32_t
-ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms);
-
-int32_t
-ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms);
-
 /*
  * Get the size of the action property for a given index.
  *
@@ -38,7 +32,76 @@ ulp_mapper_act_prop_size_get(uint32_t idx)
 }
 
 /*
- * Get the list of result fields that implement the flow action
+ * Get the list of action tables that implement the flow action.
+ * Gets a device-dependent list of tables that implement the action template id.
+ *
+ * dev_id [in] The device id of the forwarding element
+ *
+ * tid [in] The action template id that matches the flow
+ *
+ * num_tbls [out] The number of action tables in the returned array
+ *
+ * Returns An array of action tables to implement the flow, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_act_tbl_info *
+ulp_mapper_action_tbl_list_get(uint32_t dev_id,
+			       uint32_t tid,
+			       uint32_t *num_tbls)
+{
+	uint32_t	idx;
+	uint32_t	tidx;
+
+	if (!num_tbls) {
+		BNXT_TF_DBG(ERR, "Invalid arguments\n");
+		return NULL;
+	}
+
+	/* template shift and device mask */
+	tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
+
+	/* NOTE: Need to have something from template compiler to help validate
+	 * range of dev_id and act_tid
+	 */
+	idx		= ulp_act_tmpl_list[tidx].start_tbl_idx;
+	*num_tbls	= ulp_act_tmpl_list[tidx].num_tbls;
+
+	return &ulp_act_tbl_list[idx];
+}
+
+/* Get a list of classifier tables that implement the flow.
+ * Gets a device-dependent list of tables that implement the class template id.
+ *
+ * dev_id [in] The device id of the forwarding element
+ *
+ * tid [in] The template id that matches the flow
+ *
+ * num_tbls [out] The number of classifier tables in the returned array
+ *
+ * returns An array of classifier tables to implement the flow, or NULL on
+ * error
+ */
+static struct bnxt_ulp_mapper_class_tbl_info *
+ulp_mapper_class_tbl_list_get(uint32_t dev_id,
+			      uint32_t tid,
+			      uint32_t *num_tbls)
+{
+	uint32_t idx;
+	uint32_t tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
+
+	if (!num_tbls)
+		return NULL;
+
+	/* NOTE: Need to have something from template compiler to help validate
+	 * range of dev_id and tid
+	 */
+	idx		= ulp_class_tmpl_list[tidx].start_tbl_idx;
+	*num_tbls	= ulp_class_tmpl_list[tidx].num_tbls;
+
+	return &ulp_class_tbl_list[idx];
+}
+
+/*
+ * Get the list of key fields that implement the flow.
  *
  * ctxt [in] The ulp context
  *
@@ -46,7 +109,7 @@ ulp_mapper_act_prop_size_get(uint32_t idx)
  *
  * num_flds [out] The number of key fields in the returned array
  *
- * returns array of Key fields, or NULL on error
+ * Returns array of Key fields, or NULL on error.
  */
 static struct bnxt_ulp_mapper_class_key_field_info *
 ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
@@ -1147,7 +1210,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
  */
-int32_t
+static int32_t
 ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 {
 	uint32_t	i;
@@ -1169,7 +1232,7 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 }
 
 /* Create the classifier table entries for a flow. */
-int32_t
+static int32_t
 ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 {
 	uint32_t	i;
@@ -1324,3 +1387,116 @@ ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
 					 fid,
 					 BNXT_ULP_REGULAR_FLOW_TABLE);
 }
+
+/* Function to handle the mapping of the Flow to be compatible
+ * with the underlying hardware.
+ */
+int32_t
+ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
+		       uint32_t app_priority __rte_unused,
+		       struct ulp_rte_hdr_bitmap *hdr_bitmap __rte_unused,
+		       struct ulp_rte_hdr_field *hdr_field,
+		       struct ulp_rte_act_bitmap *act_bitmap,
+		       struct ulp_rte_act_prop *act_prop,
+		       uint32_t class_tid,
+		       uint32_t act_tid,
+		       uint32_t *flow_id)
+{
+	struct ulp_regfile		regfile;
+	struct bnxt_ulp_mapper_parms	parms;
+	struct bnxt_ulp_device_params	*device_params;
+	int32_t				rc, trc;
+
+	/* Initialize the parms structure */
+	memset(&parms, 0, sizeof(parms));
+	parms.act_prop = act_prop;
+	parms.act_bitmap = act_bitmap;
+	parms.regfile = &regfile;
+	parms.hdr_field = hdr_field;
+	parms.tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	parms.ulp_ctx = ulp_ctx;
+
+	/* Get the device id from the ulp context */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &parms.dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	/* Get the action table entry from device id and act context id */
+	parms.act_tid = act_tid;
+	parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
+						     parms.act_tid,
+						     &parms.num_atbls);
+	if (!parms.atbls || !parms.num_atbls) {
+		BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
+			    parms.dev_id, parms.act_tid);
+		return -EINVAL;
+	}
+
+	/* Get the class table entry from device id and class context id */
+	parms.class_tid = class_tid;
+	parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
+						    parms.class_tid,
+						    &parms.num_ctbls);
+	if (!parms.ctbls || !parms.num_ctbls) {
+		BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+			    parms.dev_id, parms.class_tid);
+		return -EINVAL;
+	}
+
+	/* Get the byte order for further processing from device params */
+	device_params = bnxt_ulp_device_params_get(parms.dev_id);
+	if (!device_params) {
+		BNXT_TF_DBG(ERR, "No device parms for %d:%d\n",
+			    parms.dev_id, parms.class_tid);
+		return -EINVAL;
+	}
+	parms.order = device_params->byte_order;
+	parms.encap_byte_swap = device_params->encap_byte_swap;
+
+	/* initialize the registry file for further processing */
+	if (!ulp_regfile_init(parms.regfile)) {
+		BNXT_TF_DBG(ERR, "regfile initialization failed.\n");
+		return -EINVAL;
+	}
+
+	/* Allocate a Flow ID for attaching all resources for the flow to.
+	 * Once allocated, all errors have to walk the list of resources and
+	 * free each of them.
+	 */
+	rc = ulp_flow_db_fid_alloc(ulp_ctx,
+				   BNXT_ULP_REGULAR_FLOW_TABLE,
+				   &parms.fid);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to allocate flow table entry\n");
+		return rc;
+	}
+
+	/* Process the action template list from the selected action table */
+	rc = ulp_mapper_action_tbls_process(&parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "action tables failed creation for %d:%d\n",
+			    parms.dev_id, parms.act_tid);
+		goto flow_error;
+	}
+
+	/* All good. Now process the class template */
+	rc = ulp_mapper_class_tbls_process(&parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "class tables failed creation for %d:%d\n",
+			    parms.dev_id, parms.class_tid);
+		goto flow_error;
+	}
+
+	*flow_id = parms.fid;
+
+	return rc;
+
+flow_error:
+	/* Free all resources that were allocated during flow creation */
+	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free all resources rc=%d\n", trc);
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 8655728..5f3d46e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -38,6 +38,21 @@ struct bnxt_ulp_mapper_parms {
 	enum bnxt_ulp_flow_db_tables		tbl_idx;
 };
 
+/*
+ * Function to handle the mapping of the Flow to be compatible
+ * with the underlying hardware.
+ */
+int32_t
+ulp_mapper_flow_create(struct bnxt_ulp_context	*ulp_ctx,
+		       uint32_t		app_priority,
+		       struct ulp_rte_hdr_bitmap  *hdr_bitmap,
+		       struct ulp_rte_hdr_field *hdr_field,
+		       struct ulp_rte_act_bitmap *act,
+		       struct ulp_rte_act_prop *act_prop,
+		       uint32_t		class_tid,
+		       uint32_t		act_tid,
+		       uint32_t		*flow_id);
+
 /* Function that frees all resources associated with the flow. */
 int32_t
 ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index ba06493..5ec7adc 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -1004,6 +1004,28 @@ struct bnxt_ulp_mapper_ident_info ulp_ident_list[] = {
 	}
 };
 
+struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
+	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 1,
+	.start_tbl_idx = 0
+	}
+};
+
+struct bnxt_ulp_mapper_act_tbl_info ulp_act_tbl_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.table_type = TF_TBL_TYPE_EXT,
+	.direction = TF_DIR_RX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 0,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN
+	}
+};
+
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
 	.field_bit_size = 14,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 733836a..957b21a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -12,6 +12,7 @@
 #define ULP_TEMPLATE_DB_H_
 
 #define BNXT_ULP_MAX_NUM_DEVICES 4
+#define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -127,6 +128,12 @@ enum bnxt_ulp_result_opc {
 	BNXT_ULP_RESULT_OPC_LAST = 4
 };
 
+enum bnxt_ulp_search_before_alloc {
+	BNXT_ULP_SEARCH_BEFORE_ALLOC_NO = 0,
+	BNXT_ULP_SEARCH_BEFORE_ALLOC_YES = 1,
+	BNXT_ULP_SEARCH_BEFORE_ALLOC_LAST = 2
+};
+
 enum bnxt_ulp_spec_opc {
 	BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index e28d049..b7094c5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,6 +17,10 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
+struct ulp_rte_hdr_bitmap {
+	uint64_t	bits;
+};
+
 /* Structure to store the protocol fields */
 #define RTE_PARSER_FLOW_HDR_FIELD_SIZE		16
 struct ulp_rte_hdr_field {
@@ -51,6 +55,13 @@ struct bnxt_ulp_device_params {
 	uint32_t			num_resources_per_flow;
 };
 
+/* Flow Mapper */
+struct bnxt_ulp_mapper_tbl_list_info {
+	uint32_t	device_name;
+	uint32_t	start_tbl_idx;
+	uint32_t	num_tbls;
+};
+
 struct bnxt_ulp_mapper_class_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	uint32_t	table_type;
@@ -132,7 +143,25 @@ struct bnxt_ulp_mapper_ident_info {
 extern struct bnxt_ulp_device_params ulp_device_params[];
 
 /*
- * The ulp_data_field_list provides the instructions for creating an action
+ * The ulp_class_tmpl_list and ulp_act_tmpl_list are indexed by the dev_id
+ * and template id (either class or action) returned by the matcher.
+ * The result provides the start index and number of entries in the connected
+ * ulp_class_tbl_list/ulp_act_tbl_list.
+ */
+extern struct bnxt_ulp_mapper_tbl_list_info	ulp_class_tmpl_list[];
+extern struct bnxt_ulp_mapper_tbl_list_info	ulp_act_tmpl_list[];
+
+/*
+ * The ulp_class_tbl_list and ulp_act_tbl_list are indexed based on the results
+ * of the template lists.  Each entry describes the high level details of the
+ * table entry to include the start index and number of instructions in the
+ * field lists.
+ */
+extern struct bnxt_ulp_mapper_class_tbl_info	ulp_class_tbl_list[];
+extern struct bnxt_ulp_mapper_act_tbl_info	ulp_act_tbl_list[];
+
+/*
+ * The ulp_class_result_field_list provides the instructions for creating result
  * records such as tcam/em results.
  */
 extern struct bnxt_ulp_mapper_result_field_info	ulp_class_result_field_list[];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 23/34] net/bnxt: match rte flow items with flow template patterns
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (21 preceding siblings ...)
  2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 22/34] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
@ 2020-04-13 19:40   ` Venkat Duvvuru
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 24/34] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
                     ` (12 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:40 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Takes hdr_bitmap generated from the rte_flow_items
2. Iterates through the static hdr_bitmap list
3. Returns success if a match is found, otherwise an error
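
The comparison itself is bitwise equality between the computed bitmap and
each template entry. A minimal sketch of the idea, assuming a single
64-bit bitmap for brevity (the driver compares a struct via
ULP_BITSET_CMP and additionally checks the VNIC action bit and the
per-field mask opcodes):

#include <stdint.h>

struct hdr_tmpl {
	uint64_t bits;          /* protocol header bits the template needs */
	uint32_t class_tmpl_id; /* template selected on a match */
};

/* Return the class template id whose header bitmap equals the bitmap
 * computed from the rte_flow items, or -1 when nothing matches.
 */
static int32_t
match_class(const struct hdr_tmpl *list, uint32_t n, uint64_t hdr_bits)
{
	uint32_t i;

	for (i = 0; i < n; i++)
		if (list[i].bits == hdr_bits)
			return (int32_t)list[i].class_tmpl_id;
	return -1;
}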

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |  12 ++
 drivers/net/bnxt/tf_ulp/ulp_matcher.c         | 152 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_matcher.h         |  26 +++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 115 +++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |  40 +++++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  21 ++++
 7 files changed, 367 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index f464d9e..455fd5c 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -63,6 +63,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mapper.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_matcher.c
 
 #
 # Export include files
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index 3516df4..e4ebfc5 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -25,6 +25,18 @@
 #define	BNXT_ULP_TX_NUM_FLOWS			32
 #define	BNXT_ULP_TX_TBL_IF_ID			0
 
+enum bnxt_tf_rc {
+	BNXT_TF_RC_PARSE_ERR	= -2,
+	BNXT_TF_RC_ERROR	= -1,
+	BNXT_TF_RC_SUCCESS	= 0
+};
+
+/* ULP direction type */
+enum ulp_direction_type {
+	ULP_DIR_INGRESS,
+	ULP_DIR_EGRESS,
+};
+
 struct bnxt_ulp_mark_tbl *
 bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.c b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
new file mode 100644
index 0000000..f367e4c
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "ulp_matcher.h"
+#include "ulp_utils.h"
+
+/* Utility function to check if bitmap is zero */
+static inline int
+ulp_field_mask_is_zero(uint8_t *bitmap, uint32_t size)
+{
+	while (size-- > 0) {
+		if (*bitmap != 0)
+			return 0;
+		bitmap++;
+	}
+	return 1;
+}
+
+/* Utility function to check if bitmap is all ones */
+static inline int
+ulp_field_mask_is_ones(uint8_t *bitmap, uint32_t size)
+{
+	while (size-- > 0) {
+		if (*bitmap != 0xFF)
+			return 0;
+		bitmap++;
+	}
+	return 1;
+}
+
+/* Utility function to check if bitmap is non zero */
+static inline int
+ulp_field_mask_notzero(uint8_t *bitmap, uint32_t size)
+{
+	while (size-- > 0) {
+		if (*bitmap != 0)
+			return 1;
+		bitmap++;
+	}
+	return 0;
+}
+
+/* Utility function to mask the computed and internal proto headers. */
+static void
+ulp_matcher_hdr_fields_normalize(struct ulp_rte_hdr_bitmap *hdr1,
+				 struct ulp_rte_hdr_bitmap *hdr2)
+{
+	/* copy the contents first */
+	rte_memcpy(hdr2, hdr1, sizeof(struct ulp_rte_hdr_bitmap));
+
+	/* reset the computed fields */
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_SVIF);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_OO_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_OI_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_IO_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_II_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_O_L3);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_O_L4);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_I_L3);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_I_L4);
+}
+
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the pattern masks against the flow templates.
+ */
+int32_t
+ulp_matcher_pattern_match(enum ulp_direction_type   dir,
+			  struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			  struct ulp_rte_hdr_field  *hdr_field,
+			  struct ulp_rte_act_bitmap *act_bitmap,
+			  uint32_t		    *class_id)
+{
+	struct bnxt_ulp_header_match_info	*sel_hdr_match;
+	uint32_t				hdr_num, idx, jdx;
+	uint32_t				match = 0;
+	struct ulp_rte_hdr_bitmap		hdr_bitmap_masked;
+	uint32_t				start_idx;
+	struct ulp_rte_hdr_field		*m_field;
+	struct bnxt_ulp_matcher_field_info	*sf;
+
+	/* Select the ingress or egress template to match against */
+	if (dir == ULP_DIR_INGRESS) {
+		sel_hdr_match = ulp_ingress_hdr_match_list;
+		hdr_num = BNXT_ULP_INGRESS_HDR_MATCH_SZ;
+	} else {
+		sel_hdr_match = ulp_egress_hdr_match_list;
+		hdr_num = BNXT_ULP_EGRESS_HDR_MATCH_SZ;
+	}
+
+	/* Remove the hdr bit maps that are internal or computed */
+	ulp_matcher_hdr_fields_normalize(hdr_bitmap, &hdr_bitmap_masked);
+
+	/* Loop through the list of class templates to find the match */
+	for (idx = 0; idx < hdr_num; idx++, sel_hdr_match++) {
+		if (ULP_BITSET_CMP(&sel_hdr_match->hdr_bitmap,
+				   &hdr_bitmap_masked)) {
+			/* no match found */
+			BNXT_TF_DBG(DEBUG, "Pattern Match failed template=%d\n",
+				    idx);
+			continue;
+		}
+		match = ULP_BITMAP_ISSET(act_bitmap->bits,
+					 BNXT_ULP_ACTION_BIT_VNIC);
+		if (match != sel_hdr_match->act_vnic) {
+			/* no match found */
+			BNXT_TF_DBG(DEBUG, "Vnic Match failed template=%d\n",
+				    idx);
+			continue;
+		} else {
+			match = 1;
+		}
+
+		/* Found a matching hdr bitmap, match the fields next */
+		start_idx = sel_hdr_match->start_idx;
+		for (jdx = 0; jdx < sel_hdr_match->num_entries; jdx++) {
+			m_field = &hdr_field[jdx + BNXT_ULP_HDR_FIELD_LAST - 1];
+			sf = &ulp_field_match[start_idx + jdx];
+			switch (sf->mask_opcode) {
+			case BNXT_ULP_FMF_MASK_ANY:
+				match &= ulp_field_mask_is_zero(m_field->mask,
+								m_field->size);
+				break;
+			case BNXT_ULP_FMF_MASK_EXACT:
+				match &= ulp_field_mask_is_ones(m_field->mask,
+								m_field->size);
+				break;
+			case BNXT_ULP_FMF_MASK_WILDCARD:
+				match &= ulp_field_mask_notzero(m_field->mask,
+								m_field->size);
+				break;
+			case BNXT_ULP_FMF_MASK_IGNORE:
+			default:
+				break;
+			}
+			if (!match)
+				break;
+		}
+		if (match) {
+			BNXT_TF_DBG(DEBUG,
+				    "Found matching pattern template %d\n",
+				    sel_hdr_match->class_tmpl_id);
+			*class_id = sel_hdr_match->class_tmpl_id;
+			return BNXT_TF_RC_SUCCESS;
+		}
+	}
+	BNXT_TF_DBG(DEBUG, "Did not find any matching template\n");
+	*class_id = 0;
+	return BNXT_TF_RC_ERROR;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.h b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
new file mode 100644
index 0000000..57a161d
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef ULP_MATCHER_H_
+#define ULP_MATCHER_H_
+
+#include <rte_log.h>
+#include "bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_tf_common.h"
+
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the pattern masks against the flow templates.
+ */
+int32_t
+ulp_matcher_pattern_match(enum ulp_direction_type	    dir,
+			  struct ulp_rte_hdr_bitmap	   *hdr_bitmap,
+			  struct ulp_rte_hdr_field	   *hdr_field,
+			  struct ulp_rte_act_bitmap	   *act_bitmap,
+			  uint32_t			   *class_id);
+
+#endif /* ULP_MATCHER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 5ec7adc..68a2dc0 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -796,6 +796,121 @@ struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
 	}
 };
 
+struct bnxt_ulp_header_match_info ulp_ingress_hdr_match_list[] = {
+	{
+	.hdr_bitmap = { .bits =
+		BNXT_ULP_HDR_BIT_O_ETH |
+		BNXT_ULP_HDR_BIT_O_IPV4 |
+		BNXT_ULP_HDR_BIT_O_UDP },
+	.start_idx = 0,
+	.num_entries = 24,
+	.class_tmpl_id = 0,
+	.act_vnic = 0
+	}
+};
+
+struct bnxt_ulp_header_match_info ulp_egress_hdr_match_list[] = {
+};
+
+struct bnxt_ulp_matcher_field_info ulp_field_match[] = {
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_ANY,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_ANY,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	}
+};
+
 struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
 	{
 	.field_bit_size = 10,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 957b21a..319500a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -13,6 +13,8 @@
 
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
+#define BNXT_ULP_INGRESS_HDR_MATCH_SZ 2
+#define BNXT_ULP_EGRESS_HDR_MATCH_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -45,6 +47,31 @@ enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_LAST             = 0x0000000008000000
 };
 
+enum bnxt_ulp_hdr_bit {
+	BNXT_ULP_HDR_BIT_SVIF                = 0x0000000000000001,
+	BNXT_ULP_HDR_BIT_O_ETH               = 0x0000000000000002,
+	BNXT_ULP_HDR_BIT_OO_VLAN             = 0x0000000000000004,
+	BNXT_ULP_HDR_BIT_OI_VLAN             = 0x0000000000000008,
+	BNXT_ULP_HDR_BIT_O_L3                = 0x0000000000000010,
+	BNXT_ULP_HDR_BIT_O_IPV4              = 0x0000000000000020,
+	BNXT_ULP_HDR_BIT_O_IPV6              = 0x0000000000000040,
+	BNXT_ULP_HDR_BIT_O_L4                = 0x0000000000000080,
+	BNXT_ULP_HDR_BIT_O_TCP               = 0x0000000000000100,
+	BNXT_ULP_HDR_BIT_O_UDP               = 0x0000000000000200,
+	BNXT_ULP_HDR_BIT_T_VXLAN             = 0x0000000000000400,
+	BNXT_ULP_HDR_BIT_T_GRE               = 0x0000000000000800,
+	BNXT_ULP_HDR_BIT_I_ETH               = 0x0000000000001000,
+	BNXT_ULP_HDR_BIT_IO_VLAN             = 0x0000000000002000,
+	BNXT_ULP_HDR_BIT_II_VLAN             = 0x0000000000004000,
+	BNXT_ULP_HDR_BIT_I_L3                = 0x0000000000008000,
+	BNXT_ULP_HDR_BIT_I_IPV4              = 0x0000000000010000,
+	BNXT_ULP_HDR_BIT_I_IPV6              = 0x0000000000020000,
+	BNXT_ULP_HDR_BIT_I_L4                = 0x0000000000040000,
+	BNXT_ULP_HDR_BIT_I_TCP               = 0x0000000000080000,
+	BNXT_ULP_HDR_BIT_I_UDP               = 0x0000000000100000,
+	BNXT_ULP_HDR_BIT_LAST                = 0x0000000000200000
+};
+
 enum bnxt_ulp_byte_order {
 	BNXT_ULP_BYTE_ORDER_BE,
 	BNXT_ULP_BYTE_ORDER_LE,
@@ -67,12 +94,25 @@ enum bnxt_ulp_fmf_mask {
 	BNXT_ULP_FMF_MASK_LAST
 };
 
+enum bnxt_ulp_fmf_spec {
+	BNXT_ULP_FMF_SPEC_IGNORE = 0,
+	BNXT_ULP_FMF_SPEC_LAST = 1
+};
+
 enum bnxt_ulp_mark_enable {
 	BNXT_ULP_MARK_ENABLE_NO = 0,
 	BNXT_ULP_MARK_ENABLE_YES = 1,
 	BNXT_ULP_MARK_ENABLE_LAST = 2
 };
 
+enum bnxt_ulp_hdr_field {
+	BNXT_ULP_HDR_FIELD_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HDR_FIELD_O_VTAG_NUM = 1,
+	BNXT_ULP_HDR_FIELD_I_VTAG_NUM = 2,
+	BNXT_ULP_HDR_FIELD_SVIF_INDEX = 3,
+	BNXT_ULP_HDR_FIELD_LAST = 4
+};
+
 enum bnxt_ulp_mask_opc {
 	BNXT_ULP_MASK_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index b7094c5..dd06fb1 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -29,6 +29,11 @@ struct ulp_rte_hdr_field {
 	uint32_t	size;
 };
 
+struct bnxt_ulp_matcher_field_info {
+	enum bnxt_ulp_fmf_mask	mask_opcode;
+	enum bnxt_ulp_fmf_spec	spec_opcode;
+};
+
 struct ulp_rte_act_bitmap {
 	uint64_t	bits;
 };
@@ -41,6 +46,22 @@ struct ulp_rte_act_prop {
 	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
 };
 
+/* Flow Matcher structures */
+struct bnxt_ulp_header_match_info {
+	struct ulp_rte_hdr_bitmap		hdr_bitmap;
+	uint32_t				start_idx;
+	uint32_t				num_entries;
+	uint32_t				class_tmpl_id;
+	uint32_t				act_vnic;
+};
+
+/* Flow Matcher templates Structure Array defined in template source */
+extern struct bnxt_ulp_header_match_info  ulp_ingress_hdr_match_list[];
+extern struct bnxt_ulp_header_match_info  ulp_egress_hdr_match_list[];
+
+/* Flow field match Information Structure Array defined in template source */
+extern struct bnxt_ulp_matcher_field_info	ulp_field_match[];
+
 /* Device specific parameters */
 struct bnxt_ulp_device_params {
 	uint8_t				description[16];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 24/34] net/bnxt: match rte flow actions with flow template actions
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (22 preceding siblings ...)
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 23/34] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
@ 2020-04-13 19:40   ` Venkat Duvvuru
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 25/34] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
                     ` (11 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:40 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Takes act_bitmap generated from the rte_flow_actions
2. Iterates through the static act_bitmap list
3. Returns success if a match is found, otherwise an error
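
The action match works the same way as the pattern match: exact bitmap
equality against a small template list. A short sketch under those
assumptions (the bit values and the single-entry list below are
illustrative, mirroring the MARK|RSS ingress template added in this
patch):

#include <stdint.h>

#define ACT_BIT_MARK 0x1ULL
#define ACT_BIT_RSS  0x2ULL

/* Illustrative single-entry ingress action template list */
static const struct {
	uint64_t bits;
	uint32_t tmpl_id;
} act_list[] = {
	{ ACT_BIT_MARK | ACT_BIT_RSS, 0 },
};

/* Select the action template whose bitmap equals the requested one */
static int match_action(uint64_t act_bits, uint32_t *tmpl_id)
{
	unsigned int i;

	for (i = 0; i < sizeof(act_list) / sizeof(act_list[0]); i++) {
		if (act_list[i].bits == act_bits) {
			*tmpl_id = act_list[i].tmpl_id;
			return 0;
		}
	}
	return -1; /* no template supports this action combination */
}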

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_matcher.c         | 36 +++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_matcher.h         |  9 +++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 12 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |  2 ++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h | 10 ++++++++
 5 files changed, 69 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.c b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
index f367e4c..040d08d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_matcher.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
@@ -150,3 +150,39 @@ ulp_matcher_pattern_match(enum ulp_direction_type   dir,
 	*class_id = 0;
 	return BNXT_TF_RC_ERROR;
 }
+
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the action against the flow templates.
+ */
+int32_t
+ulp_matcher_action_match(enum ulp_direction_type		dir,
+			 struct ulp_rte_act_bitmap		*act_bitmap,
+			 uint32_t				*act_id)
+{
+	struct bnxt_ulp_action_match_info	*sel_act_match;
+	uint32_t				act_num, idx;
+
+	/* Select the ingress or egress template to match against */
+	if (dir == ULP_DIR_INGRESS) {
+		sel_act_match = ulp_ingress_act_match_list;
+		act_num = BNXT_ULP_INGRESS_ACT_MATCH_SZ;
+	} else {
+		sel_act_match = ulp_egress_act_match_list;
+		act_num = BNXT_ULP_EGRESS_ACT_MATCH_SZ;
+	}
+
+	/* Loop through the list of action templates to find the match */
+	for (idx = 0; idx < act_num; idx++, sel_act_match++) {
+		if (!ULP_BITSET_CMP(&sel_act_match->act_bitmap,
+				    act_bitmap)) {
+			*act_id = sel_act_match->act_tmpl_id;
+			BNXT_TF_DBG(DEBUG, "Found matching action template %u\n",
+				    *act_id);
+			return BNXT_TF_RC_SUCCESS;
+		}
+	}
+	BNXT_TF_DBG(DEBUG, "Did not find any matching action template\n");
+	*act_id = 0;
+	return BNXT_TF_RC_ERROR;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.h b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
index 57a161d..c818bbe 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_matcher.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
@@ -23,4 +23,13 @@ ulp_matcher_pattern_match(enum ulp_direction_type	    dir,
 			  struct ulp_rte_act_bitmap	   *act_bitmap,
 			  uint32_t			   *class_id);
 
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the action against the flow templates.
+ */
+int32_t
+ulp_matcher_action_match(enum ulp_direction_type	dir,
+			 struct ulp_rte_act_bitmap	*act_bitmap,
+			 uint32_t			*act_id);
+
 #endif /* ULP_MATCHER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 68a2dc0..5981c74 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -1119,6 +1119,18 @@ struct bnxt_ulp_mapper_ident_info ulp_ident_list[] = {
 	}
 };
 
+struct bnxt_ulp_action_match_info ulp_ingress_act_match_list[] = {
+	{
+	.act_bitmap = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_RSS },
+	.act_tmpl_id = 0
+	}
+};
+
+struct bnxt_ulp_action_match_info ulp_egress_act_match_list[] = {
+};
+
 struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
 	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 319500a..f4850bf 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -15,6 +15,8 @@
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 #define BNXT_ULP_INGRESS_HDR_MATCH_SZ 2
 #define BNXT_ULP_EGRESS_HDR_MATCH_SZ 1
+#define BNXT_ULP_INGRESS_ACT_MATCH_SZ 2
+#define BNXT_ULP_EGRESS_ACT_MATCH_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index dd06fb1..0e811ec 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -62,6 +62,16 @@ extern struct bnxt_ulp_header_match_info  ulp_egress_hdr_match_list[];
 /* Flow field match Information Structure Array defined in template source*/
 extern struct bnxt_ulp_matcher_field_info	ulp_field_match[];
 
+/* Flow Matcher Action structures */
+struct bnxt_ulp_action_match_info {
+	struct ulp_rte_act_bitmap		act_bitmap;
+	uint32_t				act_tmpl_id;
+};
+
+/* Flow Matcher templates Structure Array defined in template source */
+extern struct bnxt_ulp_action_match_info  ulp_ingress_act_match_list[];
+extern struct bnxt_ulp_action_match_info  ulp_egress_act_match_list[];
+
 /* Device specific parameters */
 struct bnxt_ulp_device_params {
 	uint8_t				description[16];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 25/34] net/bnxt: add support for rte flow item parsing
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (23 preceding siblings ...)
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 24/34] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
@ 2020-04-13 19:40   ` Venkat Duvvuru
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 26/34] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
                     ` (10 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:40 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

1. Registers a callback handler for each rte_flow_item type, if it
   is supported
2. Iterates through each rte_flow_item until RTE_FLOW_ITEM_TYPE_END
3. Invokes the header callback handler
4. Each header callback handler populates the respective fields
   in hdr_field & hdr_bitmap
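
The parser is a function-pointer table indexed by the rte_flow item
type, with an unsupported slot rejecting the flow. A condensed
standalone sketch of that dispatch pattern (item and callback names
here are hypothetical, not the driver's ulp_hdr_info entries):

#include <stddef.h>

enum item_type { ITEM_ETH, ITEM_IPV4, ITEM_END };
struct item { enum item_type type; };

typedef int (*hdr_cb)(const struct item *it);

static int eth_cb(const struct item *it)  { (void)it; return 0; }
static int ipv4_cb(const struct item *it) { (void)it; return 0; }

/* Table indexed by item type; a NULL slot means "not supported" */
static const hdr_cb cb_tbl[] = {
	[ITEM_ETH]  = eth_cb,
	[ITEM_IPV4] = ipv4_cb,
};

static int parse(const struct item *pattern)
{
	const struct item *it;

	for (it = pattern; it->type != ITEM_END; it++) {
		if (!cb_tbl[it->type] || cb_tbl[it->type](it))
			return -1; /* unsupported item or parse error */
	}
	return 0;
}

A real table must cover every enum value up to the highest supported
item type, since unlisted designated initializers default to NULL.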

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   1 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      | 767 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h      | 120 ++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 197 +++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |   7 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  26 +
 6 files changed, 1118 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 455fd5c..5e2d751 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -64,6 +64,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mapper.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_matcher.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_rte_parser.c
 
 #
 # Export include files
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
new file mode 100644
index 0000000..3ffdcbd
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -0,0 +1,767 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_tf_common.h"
+#include "ulp_rte_parser.h"
+#include "ulp_utils.h"
+#include "tfp.h"
+
+/* Inline function to read an integer stored in big-endian format */
+static inline void ulp_util_field_int_read(uint8_t *buffer,
+					   uint32_t *val)
+{
+	uint32_t temp_val;
+
+	memcpy(&temp_val, buffer, sizeof(uint32_t));
+	*val = rte_be_to_cpu_32(temp_val);
+}
+
+/* Inline function to write an integer stored in big-endian format */
+static inline void ulp_util_field_int_write(uint8_t *buffer,
+					    uint32_t val)
+{
+	uint32_t temp_val = rte_cpu_to_be_32(val);
+
+	memcpy(buffer, &temp_val, sizeof(uint32_t));
+}
+
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow items into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
+			      struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			      struct ulp_rte_hdr_field *hdr_field)
+{
+	const struct rte_flow_item *item = pattern;
+	uint32_t field_idx = BNXT_ULP_HDR_FIELD_LAST;
+	uint32_t vlan_idx = 0;
+	struct bnxt_ulp_rte_hdr_info *hdr_info;
+
+	/* Parse all the items in the pattern */
+	while (item && item->type != RTE_FLOW_ITEM_TYPE_END) {
+		/* get the header information from the flow_hdr_info table */
+		hdr_info = &ulp_hdr_info[item->type];
+		if (hdr_info->hdr_type ==
+		    BNXT_ULP_HDR_TYPE_NOT_SUPPORTED) {
+			BNXT_TF_DBG(ERR,
+				    "Truflow parser does not support type %d\n",
+				    item->type);
+			return BNXT_TF_RC_PARSE_ERR;
+		} else if (hdr_info->hdr_type ==
+			   BNXT_ULP_HDR_TYPE_SUPPORTED) {
+			/* call the registered callback handler */
+			if (hdr_info->proto_hdr_func) {
+				if (hdr_info->proto_hdr_func(item,
+							     hdr_bitmap,
+							     hdr_field,
+							     &field_idx,
+							     &vlan_idx) !=
+				    BNXT_TF_RC_SUCCESS) {
+					return BNXT_TF_RC_ERROR;
+				}
+			}
+		}
+		item++;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Helper function to set the svif into the header bitmap and fields. */
+static int32_t
+ulp_rte_parser_svif_set(struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			enum rte_flow_item_type proto,
+			uint32_t svif,
+			uint32_t mask)
+{
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_SVIF)) {
+		BNXT_TF_DBG(ERR,
+			    "SVIF already set,"
+			    " multiple sources not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* TBD: Check for any mapping errors for svif */
+	/* Update the hdr_bitmap with BNXT_ULP_HDR_BIT_SVIF. */
+	ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_SVIF);
+
+	if (proto != RTE_FLOW_ITEM_TYPE_PF) {
+		memcpy(hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].spec,
+		       &svif, sizeof(svif));
+		memcpy(hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].mask,
+		       &mask, sizeof(mask));
+		hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].size = sizeof(svif);
+	}
+
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item PF Header. */
+int32_t
+ulp_rte_pf_hdr_handler(const struct rte_flow_item *item,
+		       struct ulp_rte_hdr_bitmap *hdr_bitmap,
+		       struct ulp_rte_hdr_field *hdr_field,
+		       uint32_t *field_idx __rte_unused,
+		       uint32_t *vlan_idx __rte_unused)
+{
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, 0, 0);
+}
+
+/* Function to handle the parsing of RTE Flow item VF Header. */
+int32_t
+ulp_rte_vf_hdr_handler(const struct rte_flow_item *item,
+		       struct ulp_rte_hdr_bitmap *hdr_bitmap,
+		       struct ulp_rte_hdr_field	 *hdr_field,
+		       uint32_t *field_idx __rte_unused,
+		       uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_vf *vf_spec, *vf_mask;
+	uint32_t svif = 0, mask = 0;
+
+	vf_spec = item->spec;
+	vf_mask = item->mask;
+
+	/*
+	 * Get the VF id from the rte_flow_item for vf and use it as
+	 * the svif.
+	 */
+	if (vf_spec)
+		svif = vf_spec->id;
+	if (vf_mask)
+		mask = vf_mask->id;
+
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, svif, mask);
+}
+
+/* Function to handle the parsing of RTE Flow item port id  Header. */
+int32_t
+ulp_rte_port_id_hdr_handler(const struct rte_flow_item *item,
+			    struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			    struct ulp_rte_hdr_field *hdr_field,
+			    uint32_t *field_idx __rte_unused,
+			    uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_port_id *port_spec, *port_mask;
+	uint32_t svif = 0, mask = 0;
+
+	port_spec = item->spec;
+	port_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for Port into hdr_field using port id
+	 * header fields.
+	 */
+	if (port_spec)
+		svif = port_spec->id;
+	if (port_mask)
+		mask = port_mask->id;
+
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, svif, mask);
+}
+
+/* Function to handle the parsing of RTE Flow item phy port Header. */
+int32_t
+ulp_rte_phy_port_hdr_handler(const struct rte_flow_item *item,
+			     struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			     struct ulp_rte_hdr_field *hdr_field,
+			     uint32_t *field_idx __rte_unused,
+			     uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_phy_port *port_spec, *port_mask;
+	uint32_t svif = 0, mask = 0;
+
+	port_spec = item->spec;
+	port_mask = item->mask;
+
+	/* Copy the rte_flow_item for phy port into hdr_field */
+	if (port_spec)
+		svif = port_spec->index;
+	if (port_mask)
+		mask = port_mask->index;
+
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, svif, mask);
+}
+
+/* Function to handle the parsing of RTE Flow item Ethernet Header. */
+int32_t
+ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
+			struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			uint32_t *field_idx,
+			uint32_t *vlan_idx)
+{
+	const struct rte_flow_item_eth *eth_spec, *eth_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+	uint64_t set_flag = 0;
+
+	eth_spec = item->spec;
+	eth_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for eth into hdr_field using ethernet
+	 * header fields
+	 */
+	if (eth_spec) {
+		hdr_field[idx].size = sizeof(eth_spec->dst.addr_bytes);
+		memcpy(hdr_field[idx++].spec, eth_spec->dst.addr_bytes,
+		       sizeof(eth_spec->dst.addr_bytes));
+		hdr_field[idx].size = sizeof(eth_spec->src.addr_bytes);
+		memcpy(hdr_field[idx++].spec, eth_spec->src.addr_bytes,
+		       sizeof(eth_spec->src.addr_bytes));
+		hdr_field[idx].size = sizeof(eth_spec->type);
+		memcpy(hdr_field[idx++].spec, &eth_spec->type,
+		       sizeof(eth_spec->type));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_ETH_NUM;
+	}
+
+	if (eth_mask) {
+		memcpy(hdr_field[mdx++].mask, eth_mask->dst.addr_bytes,
+		       sizeof(eth_mask->dst.addr_bytes));
+		memcpy(hdr_field[mdx++].mask, eth_mask->src.addr_bytes,
+		       sizeof(eth_mask->src.addr_bytes));
+		memcpy(hdr_field[mdx++].mask, &eth_mask->type,
+		       sizeof(eth_mask->type));
+	}
+	/* Add number of vlan header elements */
+	*field_idx = idx + BNXT_ULP_PROTO_HDR_VLAN_NUM;
+	*vlan_idx = idx;
+
+	/* Update the hdr_bitmap with BNXT_ULP_HDR_BIT_I_ETH */
+	set_flag = ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH);
+	if (set_flag)
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_ETH);
+	else
+		ULP_BITMAP_RESET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_ETH);
+
+	/* Update the hdr_bitmap with BNXT_ULP_HDR_BIT_O_ETH */
+	ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH);
+
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item Vlan Header. */
+int32_t
+ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
+			 struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			 struct ulp_rte_hdr_field *hdr_field,
+			 uint32_t *field_idx __rte_unused,
+			 uint32_t *vlan_idx)
+{
+	const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
+	uint32_t idx = *vlan_idx;
+	uint32_t mdx = *vlan_idx;
+	uint16_t vlan_tag, priority;
+	uint32_t outer_vtag_num = 0, inner_vtag_num = 0;
+	uint8_t *outer_tag_buffer;
+	uint8_t *inner_tag_buffer;
+
+	vlan_spec = item->spec;
+	vlan_mask = item->mask;
+	outer_tag_buffer = hdr_field[BNXT_ULP_HDR_FIELD_O_VTAG_NUM].spec;
+	inner_tag_buffer = hdr_field[BNXT_ULP_HDR_FIELD_I_VTAG_NUM].spec;
+
+	/*
+	 * Copy the rte_flow_item for vlan into hdr_field using Vlan
+	 * header fields
+	 */
+	if (vlan_spec) {
+		vlan_tag = ntohs(vlan_spec->tci);
+		priority = htons(vlan_tag >> 13);
+		vlan_tag &= 0xfff;
+		vlan_tag = htons(vlan_tag);
+
+		hdr_field[idx].size = sizeof(priority);
+		memcpy(hdr_field[idx++].spec, &priority, sizeof(priority));
+		hdr_field[idx].size = sizeof(vlan_tag);
+		memcpy(hdr_field[idx++].spec, &vlan_tag, sizeof(vlan_tag));
+		hdr_field[idx].size = sizeof(vlan_spec->inner_type);
+		memcpy(hdr_field[idx++].spec, &vlan_spec->inner_type,
+		       sizeof(vlan_spec->inner_type));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_S_VLAN_NUM;
+	}
+
+	if (vlan_mask) {
+		vlan_tag = ntohs(vlan_mask->tci);
+		priority = htons(vlan_tag >> 13);
+		vlan_tag &= 0xfff;
+		vlan_tag = htons(vlan_tag);
+
+		memcpy(hdr_field[mdx++].mask, &priority, sizeof(priority));
+		memcpy(hdr_field[mdx++].mask, &vlan_tag, sizeof(vlan_tag));
+		memcpy(hdr_field[mdx++].mask, &vlan_mask->inner_type,
+		       sizeof(vlan_mask->inner_type));
+	}
+	/* Set the vlan index to new incremented value */
+	*vlan_idx = idx;
+
+	/* Get the outer tag and inner tag counts */
+	ulp_util_field_int_read(outer_tag_buffer, &outer_vtag_num);
+	ulp_util_field_int_read(inner_tag_buffer, &inner_vtag_num);
+
+	/* Update the hdr_bitmap of the vlans */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
+	    !ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_OO_VLAN)) {
+		/* Set the outer vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_OO_VLAN);
+		outer_vtag_num++;
+		ulp_util_field_int_write(outer_tag_buffer, outer_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_O_VTAG_NUM].size =
+							sizeof(uint32_t);
+	} else if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OO_VLAN) &&
+		   !ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				     BNXT_ULP_HDR_BIT_OI_VLAN)) {
+		/* Set the outer vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_OI_VLAN);
+		outer_vtag_num++;
+		ulp_util_field_int_write(outer_tag_buffer, outer_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_O_VTAG_NUM].size =
+							    sizeof(uint32_t);
+	} else if (ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_O_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OO_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OI_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_I_ETH) &&
+		   !ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				     BNXT_ULP_HDR_BIT_IO_VLAN)) {
+		/* Set the inner vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_IO_VLAN);
+		inner_vtag_num++;
+		ulp_util_field_int_write(inner_tag_buffer, inner_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_I_VTAG_NUM].size =
+							    sizeof(uint32_t);
+	} else if (ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_O_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OO_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OI_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_I_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_IO_VLAN) &&
+		   !ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				     BNXT_ULP_HDR_BIT_II_VLAN)) {
+		/* Set the inner vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_II_VLAN);
+		inner_vtag_num++;
+		ulp_util_field_int_write(inner_tag_buffer, inner_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_I_VTAG_NUM].size =
+							    sizeof(uint32_t);
+	} else {
+		BNXT_TF_DBG(ERR, "Error Parsing: Vlan hdr found without eth\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item IPV4 Header. */
+int32_t
+ulp_rte_ipv4_hdr_handler(const struct rte_flow_item *item,
+			 struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			 struct ulp_rte_hdr_field *hdr_field,
+			 uint32_t *field_idx,
+			 uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	ipv4_spec = item->spec;
+	ipv4_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3)) {
+		BNXT_TF_DBG(ERR, "Parse Error:Third L3 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for ipv4 into hdr_field using ipv4
+	 * header fields
+	 */
+	if (ipv4_spec) {
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.version_ihl);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.version_ihl,
+		       sizeof(ipv4_spec->hdr.version_ihl));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.type_of_service);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.type_of_service,
+		       sizeof(ipv4_spec->hdr.type_of_service));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.total_length);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.total_length,
+		       sizeof(ipv4_spec->hdr.total_length));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.packet_id);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.packet_id,
+		       sizeof(ipv4_spec->hdr.packet_id));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.fragment_offset);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.fragment_offset,
+		       sizeof(ipv4_spec->hdr.fragment_offset));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.time_to_live);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.time_to_live,
+		       sizeof(ipv4_spec->hdr.time_to_live));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.next_proto_id);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.next_proto_id,
+		       sizeof(ipv4_spec->hdr.next_proto_id));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.hdr_checksum);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.hdr_checksum,
+		       sizeof(ipv4_spec->hdr.hdr_checksum));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.src_addr);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.src_addr,
+		       sizeof(ipv4_spec->hdr.src_addr));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.dst_addr);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.dst_addr,
+		       sizeof(ipv4_spec->hdr.dst_addr));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_IPV4_NUM;
+	}
+
+	if (ipv4_mask) {
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.version_ihl,
+		       sizeof(ipv4_mask->hdr.version_ihl));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.type_of_service,
+		       sizeof(ipv4_mask->hdr.type_of_service));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.total_length,
+		       sizeof(ipv4_mask->hdr.total_length));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.packet_id,
+		       sizeof(ipv4_mask->hdr.packet_id));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.fragment_offset,
+		       sizeof(ipv4_mask->hdr.fragment_offset));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.time_to_live,
+		       sizeof(ipv4_mask->hdr.time_to_live));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.next_proto_id,
+		       sizeof(ipv4_mask->hdr.next_proto_id));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.hdr_checksum,
+		       sizeof(ipv4_mask->hdr.hdr_checksum));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.src_addr,
+		       sizeof(ipv4_mask->hdr.src_addr));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.dst_addr,
+		       sizeof(ipv4_mask->hdr.dst_addr));
+	}
+	*field_idx = idx; /* Number of ipv4 header elements */
+
+	/* Set the ipv4 header bitmap and computed l3 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV6)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_IPV4);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV4);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
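
Note the dual indices in the handler above: idx advances through the
spec side (also recording each field's size) while mdx advances the
mask side from the same starting offset, so the spec and mask of a
given field always share the same hdr_field slot even when only one of
spec/mask is supplied by the application.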
+
+/* Function to handle the parsing of RTE Flow item IPV6 Header */
+int32_t
+ulp_rte_ipv6_hdr_handler(const struct rte_flow_item *item,
+			 struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			 struct ulp_rte_hdr_field *hdr_field,
+			 uint32_t *field_idx,
+			 uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	ipv6_spec = item->spec;
+	ipv6_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3)) {
+		BNXT_TF_DBG(ERR, "Parse Error: 3'rd L3 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for ipv6 into hdr_field using ipv6
+	 * header fields
+	 */
+	if (ipv6_spec) {
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.vtc_flow);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.vtc_flow,
+		       sizeof(ipv6_spec->hdr.vtc_flow));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.payload_len);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.payload_len,
+		       sizeof(ipv6_spec->hdr.payload_len));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.proto);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.proto,
+		       sizeof(ipv6_spec->hdr.proto));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.hop_limits);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.hop_limits,
+		       sizeof(ipv6_spec->hdr.hop_limits));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.src_addr);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.src_addr,
+		       sizeof(ipv6_spec->hdr.src_addr));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.dst_addr);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.dst_addr,
+		       sizeof(ipv6_spec->hdr.dst_addr));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_IPV6_NUM;
+	}
+
+	if (ipv6_mask) {
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.vtc_flow,
+		       sizeof(ipv6_mask->hdr.vtc_flow));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.payload_len,
+		       sizeof(ipv6_mask->hdr.payload_len));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.proto,
+		       sizeof(ipv6_mask->hdr.proto));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.hop_limits,
+		       sizeof(ipv6_mask->hdr.hop_limits));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.src_addr,
+		       sizeof(ipv6_mask->hdr.src_addr));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.dst_addr,
+		       sizeof(ipv6_mask->hdr.dst_addr));
+	}
+	*field_idx = idx; /* add number of ipv6 header elements */
+
+	/* Set the ipv6 header bitmap and computed l3 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV6)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_IPV6);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV6);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item UDP Header. */
+int32_t
+ulp_rte_udp_hdr_handler(const struct rte_flow_item *item,
+			struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			uint32_t *field_idx,
+			uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_udp *udp_spec, *udp_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	udp_spec = item->spec;
+	udp_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4)) {
+		BNXT_TF_DBG(ERR, "Parse Err:Third L4 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for udp into hdr_field using udp
+	 * header fields
+	 */
+	if (udp_spec) {
+		hdr_field[idx].size = sizeof(udp_spec->hdr.src_port);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.src_port,
+		       sizeof(udp_spec->hdr.src_port));
+		hdr_field[idx].size = sizeof(udp_spec->hdr.dst_port);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.dst_port,
+		       sizeof(udp_spec->hdr.dst_port));
+		hdr_field[idx].size = sizeof(udp_spec->hdr.dgram_len);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.dgram_len,
+		       sizeof(udp_spec->hdr.dgram_len));
+		hdr_field[idx].size = sizeof(udp_spec->hdr.dgram_cksum);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.dgram_cksum,
+		       sizeof(udp_spec->hdr.dgram_cksum));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_UDP_NUM;
+	}
+
+	if (udp_mask) {
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.src_port,
+		       sizeof(udp_mask->hdr.src_port));
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.dst_port,
+		       sizeof(udp_mask->hdr.dst_port));
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.dgram_len,
+		       sizeof(udp_mask->hdr.dgram_len));
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.dgram_cksum,
+		       sizeof(udp_mask->hdr.dgram_cksum));
+	}
+	*field_idx = idx; /* Add number of UDP header elements */
+
+	/* Set the udp header bitmap and computed l4 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_UDP) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_TCP)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_UDP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_UDP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item TCP Header. */
+int32_t
+ulp_rte_tcp_hdr_handler(const struct rte_flow_item *item,
+			struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			uint32_t *field_idx,
+			uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	tcp_spec = item->spec;
+	tcp_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4)) {
+		BNXT_TF_DBG(ERR, "Parse Error:Third L4 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for tcp into hdr_field using tcp
+	 * header fields
+	 */
+	if (tcp_spec) {
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.src_port);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.src_port,
+		       sizeof(tcp_spec->hdr.src_port));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.dst_port);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.dst_port,
+		       sizeof(tcp_spec->hdr.dst_port));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.sent_seq);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.sent_seq,
+		       sizeof(tcp_spec->hdr.sent_seq));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.recv_ack);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.recv_ack,
+		       sizeof(tcp_spec->hdr.recv_ack));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.data_off);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.data_off,
+		       sizeof(tcp_spec->hdr.data_off));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.tcp_flags);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.tcp_flags,
+		       sizeof(tcp_spec->hdr.tcp_flags));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.rx_win);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.rx_win,
+		       sizeof(tcp_spec->hdr.rx_win));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.cksum);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.cksum,
+		       sizeof(tcp_spec->hdr.cksum));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.tcp_urp);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.tcp_urp,
+		       sizeof(tcp_spec->hdr.tcp_urp));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_TCP_NUM;
+	}
+
+	if (tcp_mask) {
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.src_port,
+		       sizeof(tcp_mask->hdr.src_port));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.dst_port,
+		       sizeof(tcp_mask->hdr.dst_port));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.sent_seq,
+		       sizeof(tcp_mask->hdr.sent_seq));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.recv_ack,
+		       sizeof(tcp_mask->hdr.recv_ack));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.data_off,
+		       sizeof(tcp_mask->hdr.data_off));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.tcp_flags,
+		       sizeof(tcp_mask->hdr.tcp_flags));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.rx_win,
+		       sizeof(tcp_mask->hdr.rx_win));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.cksum,
+		       sizeof(tcp_mask->hdr.cksum));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.tcp_urp,
+		       sizeof(tcp_mask->hdr.tcp_urp));
+	}
+	*field_idx = idx; /* add number of TCP header elements */
+
+	/* Set the tcp header bitmap and computed l4 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_UDP) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_TCP)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_TCP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_TCP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item Vxlan Header. */
+int32_t
+ulp_rte_vxlan_hdr_handler(const struct rte_flow_item *item,
+			  struct ulp_rte_hdr_bitmap *hdrbitmap,
+			  struct ulp_rte_hdr_field *hdr_field,
+			  uint32_t *field_idx,
+			  uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_vxlan *vxlan_spec, *vxlan_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	vxlan_spec = item->spec;
+	vxlan_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for vxlan into hdr_field using vxlan
+	 * header fields
+	 */
+	if (vxlan_spec) {
+		hdr_field[idx].size = sizeof(vxlan_spec->flags);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->flags,
+		       sizeof(vxlan_spec->flags));
+		hdr_field[idx].size = sizeof(vxlan_spec->rsvd0);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->rsvd0,
+		       sizeof(vxlan_spec->rsvd0));
+		hdr_field[idx].size = sizeof(vxlan_spec->vni);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->vni,
+		       sizeof(vxlan_spec->vni));
+		hdr_field[idx].size = sizeof(vxlan_spec->rsvd1);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->rsvd1,
+		       sizeof(vxlan_spec->rsvd1));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_VXLAN_NUM;
+	}
+
+	if (vxlan_mask) {
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->flags,
+		       sizeof(vxlan_mask->flags));
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->rsvd0,
+		       sizeof(vxlan_mask->rsvd0));
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->vni,
+		       sizeof(vxlan_mask->vni));
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->rsvd1,
+		       sizeof(vxlan_mask->rsvd1));
+	}
+	*field_idx = idx; /* Add number of vxlan header elements */
+
+	/* Update the hdr_bitmap with vxlan */
+	ULP_BITMAP_SET(hdrbitmap->bits, BNXT_ULP_HDR_BIT_T_VXLAN);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item void Header */
+int32_t
+ulp_rte_void_hdr_handler(const struct rte_flow_item *item __rte_unused,
+			 struct ulp_rte_hdr_bitmap *hdr_bit __rte_unused,
+			 struct ulp_rte_hdr_field *hdr_field __rte_unused,
+			 uint32_t *field_idx __rte_unused,
+			 uint32_t *vlan_idx __rte_unused)
+{
+	return BNXT_TF_RC_SUCCESS;
+}
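
A minimal usage sketch of the item parser (a hypothetical caller, not
part of the patch; assumes the tf_ulp headers are on the include path
and that BNXT_ULP_PROTO_HDR_MAX bounds the field array as declared in
ulp_template_struct.h):

	#include <string.h>
	#include <rte_byteorder.h>
	#include <rte_flow.h>
	#include "ulp_rte_parser.h"

	static int32_t parse_eth_ipv4_udp(void)
	{
		struct rte_flow_item_eth eth = { .type = RTE_BE16(0x0800) };
		struct rte_flow_item_ipv4 ip = { .hdr.next_proto_id = 17 };
		struct rte_flow_item_udp udp = { .hdr.dst_port = RTE_BE16(4789) };
		const struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_ETH,  .spec = &eth },
			{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip },
			{ .type = RTE_FLOW_ITEM_TYPE_UDP,  .spec = &udp },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		struct ulp_rte_hdr_bitmap hdr_bitmap;
		struct ulp_rte_hdr_field hdr_field[BNXT_ULP_PROTO_HDR_MAX];

		memset(&hdr_bitmap, 0, sizeof(hdr_bitmap));
		memset(hdr_field, 0, sizeof(hdr_field));
		/* On success the O_ETH/O_IPV4/O_UDP (and O_L3/O_L4) bits
		 * are set and the spec bytes land in hdr_field slots. */
		return bnxt_ulp_rte_parser_hdr_parse(pattern, &hdr_bitmap,
						     hdr_field);
	}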
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
new file mode 100644
index 0000000..3a7845d
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_RTE_PARSER_H_
+#define _ULP_RTE_PARSER_H_
+
+#include <rte_log.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow items into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
+			      struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			      struct ulp_rte_hdr_field  *hdr_field);
+
+/* Function to handle the parsing of RTE Flow item PF Header. */
+int32_t
+ulp_rte_pf_hdr_handler(const struct rte_flow_item	*item,
+		       struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+		       struct ulp_rte_hdr_field		*hdr_field,
+		       uint32_t				*field_idx,
+		       uint32_t				*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item VF Header. */
+int32_t
+ulp_rte_vf_hdr_handler(const struct rte_flow_item	*item,
+		       struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+		       struct ulp_rte_hdr_field		*hdr_field,
+		       uint32_t				*field_idx,
+		       uint32_t				*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item port id Header. */
+int32_t
+ulp_rte_port_id_hdr_handler(const struct rte_flow_item	*item,
+			    struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			    struct ulp_rte_hdr_field	*hdr_field,
+			    uint32_t			*field_idx,
+			    uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item port id Header. */
+int32_t
+ulp_rte_phy_port_hdr_handler(const struct rte_flow_item	*item,
+			     struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			     struct ulp_rte_hdr_field	*hdr_field,
+			     uint32_t			*field_idx,
+			     uint32_t			*vlan_idx);
+
+/* Function to handle the RTE item Ethernet Header. */
+int32_t
+ulp_rte_eth_hdr_handler(const struct rte_flow_item	*item,
+			struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			struct ulp_rte_hdr_field	*hdr_field,
+			uint32_t			*field_idx,
+			uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item Vlan Header. */
+int32_t
+ulp_rte_vlan_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item IPV4 Header. */
+int32_t
+ulp_rte_ipv4_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item IPV6 Header. */
+int32_t
+ulp_rte_ipv6_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item UDP Header. */
+int32_t
+ulp_rte_udp_hdr_handler(const struct rte_flow_item	*item,
+			struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			struct ulp_rte_hdr_field	*hdr_field,
+			uint32_t			*field_idx,
+			uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item TCP Header. */
+int32_t
+ulp_rte_tcp_hdr_handler(const struct rte_flow_item	*item,
+			struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			struct ulp_rte_hdr_field	*hdr_field,
+			uint32_t			*field_idx,
+			uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item Vxlan Header. */
+int32_t
+ulp_rte_vxlan_hdr_handler(const struct rte_flow_item	*item,
+			  struct ulp_rte_hdr_bitmap	*hdrbitmap,
+			  struct ulp_rte_hdr_field	*hdr_field,
+			  uint32_t			*field_idx,
+			  uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item void Header. */
+int32_t
+ulp_rte_void_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+#endif /* _ULP_RTE_PARSER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 5981c74..1d15563 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -11,6 +11,8 @@
 #include "ulp_template_db.h"
 #include "ulp_template_field_db.h"
 #include "ulp_template_struct.h"
+#include "ulp_rte_parser.h"
+
 uint32_t ulp_act_prop_map_table[] = {
 	[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ] =
 		BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ,
@@ -108,6 +110,201 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 	}
 };
 
+struct bnxt_ulp_rte_hdr_info ulp_hdr_info[] = {
+	[RTE_FLOW_ITEM_TYPE_END] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_END,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_VOID] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_void_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_INVERT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ANY] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PF] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_pf_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_VF] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_vf_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_PHY_PORT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_phy_port_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_PORT_ID] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_port_id_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_RAW] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ETH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_eth_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_VLAN] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_vlan_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV4] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_ipv4_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV6] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_ipv6_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_UDP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_udp_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_TCP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_tcp_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_SCTP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_VXLAN] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_vxlan_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_E_TAG] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_NVGRE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_MPLS] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GRE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_FUZZY] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTPC] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTPU] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ESP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GENEVE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_VXLAN_GPE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV6_EXT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_NS] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_NA] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_SLA_ETH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_TLA_ETH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_MARK] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_META] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GRE_KEY] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTP_PSC] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PPPOES] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PPPOED] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PPPOE_PROTO_ID] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_NSH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_IGMP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_AH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_HIGIG2] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	}
+};
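
A hedged sketch of how this table drives dispatch, mirroring the
equivalent action loop added in a later patch; hdr_bitmap/hdr_field
come from the caller and the starting field_idx value here is an
assumption, not taken from the patch:

	const struct rte_flow_item *item = pattern;
	uint32_t field_idx = BNXT_ULP_HDR_FIELD_LAST, vlan_idx = 0;
	struct bnxt_ulp_rte_hdr_info *hdr_info;

	while (item && item->type != RTE_FLOW_ITEM_TYPE_END) {
		hdr_info = &ulp_hdr_info[item->type];
		if (hdr_info->hdr_type == BNXT_ULP_HDR_TYPE_NOT_SUPPORTED)
			return BNXT_TF_RC_ERROR;
		if (hdr_info->proto_hdr_func &&
		    hdr_info->proto_hdr_func(item, hdr_bitmap, hdr_field,
					     &field_idx, &vlan_idx) !=
		    BNXT_TF_RC_SUCCESS)
			return BNXT_TF_RC_ERROR;
		item++;
	}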
+
 struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
 	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index f4850bf..906b542 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -115,6 +115,13 @@ enum bnxt_ulp_hdr_field {
 	BNXT_ULP_HDR_FIELD_LAST = 4
 };
 
+enum bnxt_ulp_hdr_type {
+	BNXT_ULP_HDR_TYPE_NOT_SUPPORTED = 0,
+	BNXT_ULP_HDR_TYPE_SUPPORTED = 1,
+	BNXT_ULP_HDR_TYPE_END = 2,
+	BNXT_ULP_HDR_TYPE_LAST = 3
+};
+
 enum bnxt_ulp_mask_opc {
 	BNXT_ULP_MASK_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 0e811ec..0699634 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,6 +17,18 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
+/*
+ * Number of hdr_field entries consumed by each protocol; each value
+ * matches the per-field copies done by the corresponding handler
+ * (e.g. ulp_rte_ipv4_hdr_handler copies ten IPv4 header fields).
+ */
+#define BNXT_ULP_PROTO_HDR_SVIF_NUM	1
+#define BNXT_ULP_PROTO_HDR_ETH_NUM	3
+#define BNXT_ULP_PROTO_HDR_S_VLAN_NUM	3
+#define BNXT_ULP_PROTO_HDR_VLAN_NUM	6
+#define BNXT_ULP_PROTO_HDR_IPV4_NUM	10
+#define BNXT_ULP_PROTO_HDR_IPV6_NUM	6
+#define BNXT_ULP_PROTO_HDR_UDP_NUM	4
+#define BNXT_ULP_PROTO_HDR_TCP_NUM	9
+#define BNXT_ULP_PROTO_HDR_VXLAN_NUM	4
+#define BNXT_ULP_PROTO_HDR_MAX		128
+
 struct ulp_rte_hdr_bitmap {
 	uint64_t	bits;
 };
@@ -29,6 +41,20 @@ struct ulp_rte_hdr_field {
 	uint32_t	size;
 };
 
+/* Flow Parser Header Information Structure */
+struct bnxt_ulp_rte_hdr_info {
+	enum bnxt_ulp_hdr_type					hdr_type;
+	/* Flow Parser Protocol Header Function Prototype */
+	int (*proto_hdr_func)(const struct rte_flow_item	*item_list,
+			      struct ulp_rte_hdr_bitmap		*hdr_bitmap,
+			      struct ulp_rte_hdr_field		*hdr_field,
+			      uint32_t				*field_idx,
+			      uint32_t				*vlan_idx);
+};
+
+/* Flow Parser Header Information Structure Array defined in template source */
+extern struct bnxt_ulp_rte_hdr_info	ulp_hdr_info[];
+
 struct bnxt_ulp_matcher_field_info {
 	enum bnxt_ulp_fmf_mask	mask_opcode;
 	enum bnxt_ulp_fmf_spec	spec_opcode;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 26/34] net/bnxt: add support for rte flow action parsing
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (24 preceding siblings ...)
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 25/34] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
@ 2020-04-13 19:40   ` Venkat Duvvuru
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 27/34] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
                     ` (9 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:40 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Registers a callback handler for each supported rte_flow_action
   type
2. Iterates through each rte_flow_action until RTE_FLOW_ACTION_TYPE_END
3. Invokes the action callback handler
4. Each action callback handler populates the respective fields in
   act_details & act_bitmap, as sketched below
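
A minimal, hypothetical caller (not part of this patch) would look
like:

	struct rte_flow_action_mark mark = { .id = 0x1234 };
	struct rte_flow_action_rss rss;
	struct ulp_rte_act_bitmap act_bitmap;
	struct ulp_rte_act_prop act_prop;
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_RSS,  .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	memset(&rss, 0, sizeof(rss));
	memset(&act_bitmap, 0, sizeof(act_bitmap));
	memset(&act_prop, 0, sizeof(act_prop));
	/* Sets BNXT_ULP_ACTION_BIT_MARK and _RSS in act_bitmap and
	 * stores the big-endian mark id in act_details. */
	bnxt_ulp_rte_parser_act_parse(actions, &act_bitmap, &act_prop);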

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |   7 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      | 441 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h      |  85 ++++-
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 199 ++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |   7 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  13 +
 6 files changed, 751 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index e4ebfc5..f417579 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -31,6 +31,13 @@ enum bnxt_tf_rc {
 	BNXT_TF_RC_SUCCESS	= 0
 };
 
+/* eth IP type */
+enum bnxt_ulp_eth_ip_type {
+	BNXT_ULP_ETH_IPV4 = 4,
+	BNXT_ULP_ETH_IPV6 = 5,
+	BNXT_ULP_MAX_ETH_IP_TYPE = 0
+};
+
 /* ulp direction Type */
 enum ulp_direction_type {
 	ULP_DIR_INGRESS,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 3ffdcbd..7a31b43 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -30,6 +30,21 @@ static inline void ulp_util_field_int_write(uint8_t *buffer,
 	memcpy(buffer, &temp_val, sizeof(uint32_t));
 }
 
+/* Utility function to skip the void items. */
+static inline int32_t
+ulp_rte_item_skip_void(const struct rte_flow_item **item, uint32_t increment)
+{
+	if (!*item)
+		return 0;
+	if (increment)
+		(*item)++;
+	while ((*item) && (*item)->type == RTE_FLOW_ITEM_TYPE_VOID)
+		(*item)++;
+	if (*item)
+		return 1;
+	return 0;
+}
+
 /*
  * Function to handle the parsing of RTE Flows and placing
  * the RTE flow items into the ulp structures.
@@ -73,6 +88,45 @@ bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
 	return BNXT_TF_RC_SUCCESS;
 }
 
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow actions into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action actions[],
+			      struct ulp_rte_act_bitmap *act_bitmap,
+			      struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action *action_item = actions;
+	struct bnxt_ulp_rte_act_info *hdr_info;
+
+	/* Parse all the actions in the action list */
+	while (action_item && action_item->type != RTE_FLOW_ACTION_TYPE_END) {
+		/* get the action information from the ulp_act_info table */
+		hdr_info = &ulp_act_info[action_item->type];
+		if (hdr_info->act_type ==
+		    BNXT_ULP_ACT_TYPE_NOT_SUPPORTED) {
+			BNXT_TF_DBG(ERR,
+				    "Truflow parser does not support act %u\n",
+				    action_item->type);
+			return BNXT_TF_RC_ERROR;
+		} else if (hdr_info->act_type ==
+		    BNXT_ULP_ACT_TYPE_SUPPORTED) {
+			/* call the registered callback handler */
+			if (hdr_info->proto_act_func) {
+				if (hdr_info->proto_act_func(action_item,
+							     act_bitmap,
+							     act_prop) !=
+				    BNXT_TF_RC_SUCCESS) {
+					return BNXT_TF_RC_ERROR;
+				}
+			}
+		}
+		action_item++;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
 /* Function to handle the parsing of RTE Flow item PF Header. */
 static int32_t
 ulp_rte_parser_svif_set(struct ulp_rte_hdr_bitmap *hdr_bitmap,
@@ -765,3 +819,390 @@ ulp_rte_void_hdr_handler(const struct rte_flow_item *item __rte_unused,
 {
 	return BNXT_TF_RC_SUCCESS;
 }
+
+/* Function to handle the parsing of RTE Flow action void Header. */
+int32_t
+ulp_rte_void_act_handler(const struct rte_flow_action *action_item __rte_unused,
+			 struct ulp_rte_act_bitmap *act __rte_unused,
+			 struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action Mark Header. */
+int32_t
+ulp_rte_mark_act_handler(const struct rte_flow_action *action_item,
+			 struct ulp_rte_act_bitmap *act,
+			 struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_mark *mark;
+	uint32_t mark_id = 0;
+
+	mark = action_item->conf;
+	if (mark) {
+		mark_id = tfp_cpu_to_be_32(mark->id);
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_MARK],
+		       &mark_id, BNXT_ULP_ACT_PROP_SZ_MARK);
+
+		/* Update the act_bitmap with mark */
+		ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_MARK);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: Mark arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
+
+/* Function to handle the parsing of RTE Flow action RSS Header. */
+int32_t
+ulp_rte_rss_act_handler(const struct rte_flow_action *action_item,
+			struct ulp_rte_act_bitmap *act,
+			struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	const struct rte_flow_action_rss *rss;
+
+	rss = action_item->conf;
+	if (rss) {
+		/* Update the act_bitmap with rss */
+		ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_RSS);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: RSS arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
+
+/* Function to handle the parsing of RTE Flow action vxlan_encap Header. */
+int32_t
+ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
+				struct ulp_rte_act_bitmap *act,
+				struct ulp_rte_act_prop *ap)
+{
+	const struct rte_flow_action_vxlan_encap *vxlan_encap;
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv6 *ipv6_spec;
+	struct rte_flow_item_vxlan vxlan_spec;
+	uint32_t vlan_num = 0, vlan_size = 0;
+	uint32_t ip_size = 0, ip_type = 0;
+	uint32_t vxlan_size = 0;
+	uint8_t *buff;
+	/* IP header per byte - ver/hlen, TOS, ID, ID, FRAG, FRAG, TTL, PROTO */
+	const uint8_t	def_ipv4_hdr[] = {0x45, 0x00, 0x00, 0x01, 0x00,
+				    0x00, 0x40, 0x11};
+
+	vxlan_encap = action_item->conf;
+	if (!vxlan_encap) {
+		BNXT_TF_DBG(ERR, "Parse Error: Vxlan_encap arg is invalid\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	item = vxlan_encap->definition;
+	if (!item) {
+		BNXT_TF_DBG(ERR, "Parse Error: definition arg is invalid\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (!ulp_rte_item_skip_void(&item, 0))
+		return BNXT_TF_RC_ERROR;
+
+	/* must have ethernet header */
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		BNXT_TF_DBG(ERR, "Parse Error:vxlan encap does not have eth\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	eth_spec = item->spec;
+	buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC];
+	ulp_encap_buffer_copy(buff,
+			      eth_spec->dst.addr_bytes,
+			      BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC);
+
+	/* Goto the next item */
+	if (!ulp_rte_item_skip_void(&item, 1))
+		return BNXT_TF_RC_ERROR;
+
+	/* May have vlan header */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		vlan_num++;
+		buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG];
+		ulp_encap_buffer_copy(buff,
+				      item->spec,
+				      sizeof(struct rte_flow_item_vlan));
+
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	}
+
+	/* may have two vlan headers */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		vlan_num++;
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG +
+		       sizeof(struct rte_flow_item_vlan)],
+		       item->spec,
+		       sizeof(struct rte_flow_item_vlan));
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	}
+	/* Update the vlan count and size if vlan headers were found */
+	if (vlan_num) {
+		vlan_size = vlan_num * sizeof(struct rte_flow_item_vlan);
+		vlan_num = tfp_cpu_to_be_32(vlan_num);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_NUM],
+		       &vlan_num,
+		       sizeof(uint32_t));
+		vlan_size = tfp_cpu_to_be_32(vlan_size);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_SZ],
+		       &vlan_size,
+		       sizeof(uint32_t));
+	}
+
+	/* L3 must be IPv4, IPv6 */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+		ipv4_spec = item->spec;
+		ip_size = BNXT_ULP_ENCAP_IPV4_SIZE;
+
+		/* copy the ipv4 details */
+		if (ulp_buffer_is_empty(&ipv4_spec->hdr.version_ihl,
+					BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS)) {
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP];
+			ulp_encap_buffer_copy(buff,
+					      def_ipv4_hdr,
+					      BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS +
+					      BNXT_ULP_ENCAP_IPV4_ID_PROTO);
+		} else {
+			const uint8_t *tmp_buff;
+
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP];
+			ulp_encap_buffer_copy(buff,
+					      &ipv4_spec->hdr.version_ihl,
+					      BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS);
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP +
+			     BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS];
+			tmp_buff = (const uint8_t *)&ipv4_spec->hdr.packet_id;
+			ulp_encap_buffer_copy(buff,
+					      tmp_buff,
+					      BNXT_ULP_ENCAP_IPV4_ID_PROTO);
+		}
+		buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP +
+		    BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS +
+		    BNXT_ULP_ENCAP_IPV4_ID_PROTO];
+		ulp_encap_buffer_copy(buff,
+				      (const uint8_t *)&ipv4_spec->hdr.dst_addr,
+				      BNXT_ULP_ENCAP_IPV4_DEST_IP);
+
+		/* Update the ip size details */
+		ip_size = tfp_cpu_to_be_32(ip_size);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ],
+		       &ip_size, sizeof(uint32_t));
+
+		/* update the ip type */
+		ip_type = rte_cpu_to_be_32(BNXT_ULP_ETH_IPV4);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
+		       &ip_type, sizeof(uint32_t));
+
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	} else if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		ipv6_spec = item->spec;
+		ip_size = BNXT_ULP_ENCAP_IPV6_SIZE;
+
+		/* copy the ipv6 details */
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP],
+		       ipv6_spec, BNXT_ULP_ENCAP_IPV6_SIZE);
+
+		/* Update the ip size details */
+		ip_size = tfp_cpu_to_be_32(ip_size);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ],
+		       &ip_size, sizeof(uint32_t));
+
+		/* update the ip type */
+		ip_type = rte_cpu_to_be_32(BNXT_ULP_ETH_IPV6);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
+		       &ip_type, sizeof(uint32_t));
+
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	} else {
+		BNXT_TF_DBG(ERR, "Parse Error: Vxlan Encap expects L3 hdr\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* L4 is UDP */
+	if (item->type != RTE_FLOW_ITEM_TYPE_UDP) {
+		BNXT_TF_DBG(ERR, "vxlan encap does not have udp\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	/* copy the udp details */
+	ulp_encap_buffer_copy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP],
+			      item->spec, BNXT_ULP_ENCAP_UDP_SIZE);
+
+	if (!ulp_rte_item_skip_void(&item, 1))
+		return BNXT_TF_RC_ERROR;
+
+	/* Finally VXLAN */
+	if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) {
+		BNXT_TF_DBG(ERR, "vxlan encap does not have vni\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	vxlan_size = sizeof(struct rte_flow_item_vxlan);
+	/* copy the vxlan details */
+	memcpy(&vxlan_spec, item->spec, vxlan_size);
+	vxlan_spec.flags = 0x08;
+	ulp_encap_buffer_copy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN],
+			      (const uint8_t *)&vxlan_spec,
+			      vxlan_size);
+	vxlan_size = tfp_cpu_to_be_32(vxlan_size);
+	memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ],
+	       &vxlan_size, sizeof(uint32_t));
+
+	/* Update the act_bitmap with vxlan encap */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VXLAN_ENCAP);
+	return BNXT_TF_RC_SUCCESS;
+}
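
A sketch of an encap definition list the handler above accepts
(hypothetical values; ETH first, then optional VLANs, then IPv4 or
IPv6, UDP and VXLAN, terminated by END):

	struct rte_flow_item_eth enc_eth = {
		.dst.addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 } };
	struct rte_flow_item_ipv4 enc_ip = {
		.hdr.dst_addr = RTE_BE32(0x0a000001) }; /* 10.0.0.1 */
	struct rte_flow_item_udp enc_udp = {
		.hdr.dst_port = RTE_BE16(4789) };
	struct rte_flow_item_vxlan enc_vxlan = { .vni = { 0, 0, 10 } };
	struct rte_flow_item defn[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH,   .spec = &enc_eth },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,  .spec = &enc_ip },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP,   .spec = &enc_udp },
		{ .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &enc_vxlan },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_vxlan_encap conf = { .definition = defn };
	/* Passed as the .conf of an RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP
	 * action through bnxt_ulp_rte_parser_act_parse(); with the
	 * IPv4 version_ihl left at zero the handler substitutes the
	 * default header bytes from def_ipv4_hdr. */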
+
+/* Function to handle the parsing of RTE Flow action vxlan_decap Header */
+int32_t
+ulp_rte_vxlan_decap_act_handler(const struct rte_flow_action *action_item
+				__rte_unused,
+				struct ulp_rte_act_bitmap *act,
+				struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	/* update the act_bitmap with vxlan decap */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VXLAN_DECAP);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action drop Header. */
+int32_t
+ulp_rte_drop_act_handler(const struct rte_flow_action *action_item __rte_unused,
+			 struct ulp_rte_act_bitmap *act,
+			 struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	/* Update the hdr_bitmap with drop */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_DROP);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action count. */
+int32_t
+ulp_rte_count_act_handler(const struct rte_flow_action *action_item,
+			  struct ulp_rte_act_bitmap *act,
+			  struct ulp_rte_act_prop *act_prop __rte_unused)
+
+{
+	const struct rte_flow_action_count *act_count;
+
+	act_count = action_item->conf;
+	if (act_count) {
+		if (act_count->shared) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Error:Shared count not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_COUNT],
+		       &act_count->id,
+		       BNXT_ULP_ACT_PROP_SZ_COUNT);
+	}
+
+	/* Update the hdr_bitmap with count */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_COUNT);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action PF. */
+int32_t
+ulp_rte_pf_act_handler(const struct rte_flow_action *action_item __rte_unused,
+		       struct ulp_rte_act_bitmap *act,
+		       struct ulp_rte_act_prop *act_prop)
+{
+	uint8_t *svif_buf;
+	uint8_t *vnic_buffer;
+	uint32_t svif;
+
+	/* Update the hdr_bitmap with vnic bit */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VNIC);
+
+	/* copy the PF of the current device into VNIC Property */
+	svif_buf = &act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC];
+	ulp_util_field_int_read(svif_buf, &svif);
+	vnic_buffer = &act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC];
+	ulp_util_field_int_write(vnic_buffer, svif);
+
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action VF. */
+int32_t
+ulp_rte_vf_act_handler(const struct rte_flow_action *action_item,
+		       struct ulp_rte_act_bitmap *act,
+		       struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_vf *vf_action;
+
+	vf_action = action_item->conf;
+	if (vf_action) {
+		if (vf_action->original) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Error:VF Original not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		/* TBD: Update the computed VNIC using VF conversion */
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		       &vf_action->id,
+		       BNXT_ULP_ACT_PROP_SZ_VNIC);
+	}
+
+	/* Update the act_bitmap with vnic */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VNIC);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action port_id. */
+int32_t
+ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
+			    struct ulp_rte_act_bitmap *act,
+			    struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_port_id *port_id;
+
+	port_id = act_item->conf;
+	if (port_id) {
+		if (port_id->original) {
+			BNXT_TF_DBG(ERR,
+				    "ParseErr:Portid Original not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		/* TBD: Update the computed VNIC using port conversion */
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		       &port_id->id,
+		       BNXT_ULP_ACT_PROP_SZ_VNIC);
+	}
+
+	/* Update the act_bitmap with vnic */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VNIC);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action phy_port. */
+int32_t
+ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
+			     struct ulp_rte_act_bitmap *act,
+			     struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_phy_port *phy_port;
+
+	phy_port = action_item->conf;
+	if (phy_port) {
+		if (phy_port->original) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Err:Port Original not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
+		       &phy_port->index,
+		       BNXT_ULP_ACT_PROP_SZ_VPORT);
+	}
+
+	/* Update the act_bitmap with vport */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VPORT);
+	return BNXT_TF_RC_SUCCESS;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index 3a7845d..0ab43d2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -12,6 +12,14 @@
 #include "ulp_template_db.h"
 #include "ulp_template_struct.h"
 
+/*
+ * Defines used in the tunnel (vxlan encap) header parsing; the three
+ * IPv4 pieces (2 + 6 + 4 bytes) add up to BNXT_ULP_ENCAP_IPV4_SIZE.
+ */
+#define BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS	2
+#define BNXT_ULP_ENCAP_IPV4_ID_PROTO		6
+#define BNXT_ULP_ENCAP_IPV4_DEST_IP		4
+#define BNXT_ULP_ENCAP_IPV4_SIZE		12
+#define BNXT_ULP_ENCAP_IPV6_SIZE		8
+#define BNXT_ULP_ENCAP_UDP_SIZE			4
+
 /*
  * Function to handle the parsing of RTE Flows and placing
  * the RTE flow items into the ulp structures.
@@ -21,6 +29,15 @@ bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
 			      struct ulp_rte_hdr_bitmap *hdr_bitmap,
 			      struct ulp_rte_hdr_field  *hdr_field);
 
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow actions into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action	actions[],
+			      struct ulp_rte_act_bitmap		*act_bitmap,
+			      struct ulp_rte_act_prop		*act_prop);
+
 /* Function to handle the parsing of RTE Flow item PF Header. */
 int32_t
 ulp_rte_pf_hdr_handler(const struct rte_flow_item	*item,
@@ -45,7 +62,7 @@ ulp_rte_port_id_hdr_handler(const struct rte_flow_item	*item,
 			    uint32_t			*field_idx,
 			    uint32_t			*vlan_idx);
 
-/* Function to handle the parsing of RTE Flow item port id Header. */
+/* Function to handle the parsing of RTE Flow item port Header. */
 int32_t
 ulp_rte_phy_port_hdr_handler(const struct rte_flow_item	*item,
 			     struct ulp_rte_hdr_bitmap	*hdr_bitmap,
@@ -117,4 +134,70 @@ ulp_rte_void_hdr_handler(const struct rte_flow_item	*item,
 			 uint32_t			*field_idx,
 			 uint32_t			*vlan_idx);
 
+/* Function to handle the parsing of RTE Flow action void Header. */
+int32_t
+ulp_rte_void_act_handler(const struct rte_flow_action	*action_item,
+			 struct ulp_rte_act_bitmap	*act,
+			 struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action RSS Header. */
+int32_t
+ulp_rte_rss_act_handler(const struct rte_flow_action	*action_item,
+			struct ulp_rte_act_bitmap	*act,
+			struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action Mark Header. */
+int32_t
+ulp_rte_mark_act_handler(const struct rte_flow_action	*action_item,
+			 struct ulp_rte_act_bitmap	*act,
+			 struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action vxlan_encap Header. */
+int32_t
+ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action	*action_item,
+				struct ulp_rte_act_bitmap	*act,
+				struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action vxlan_decap Header. */
+int32_t
+ulp_rte_vxlan_decap_act_handler(const struct rte_flow_action	*action_item,
+				struct ulp_rte_act_bitmap	*act,
+				struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action drop Header. */
+int32_t
+ulp_rte_drop_act_handler(const struct rte_flow_action	*action_item,
+			 struct ulp_rte_act_bitmap	*act,
+			 struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action count. */
+int32_t
+ulp_rte_count_act_handler(const struct rte_flow_action	*action_item,
+			  struct ulp_rte_act_bitmap	*act,
+			  struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action PF. */
+int32_t
+ulp_rte_pf_act_handler(const struct rte_flow_action	*action_item,
+		       struct ulp_rte_act_bitmap	*act,
+		       struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action VF. */
+int32_t
+ulp_rte_vf_act_handler(const struct rte_flow_action	*action_item,
+		       struct ulp_rte_act_bitmap	*act,
+		       struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action port_id. */
+int32_t
+ulp_rte_port_id_act_handler(const struct rte_flow_action	*act_item,
+			    struct ulp_rte_act_bitmap		*act,
+			    struct ulp_rte_act_prop		*act_p);
+
+/* Function to handle the parsing of RTE Flow action phy_port. */
+int32_t
+ulp_rte_phy_port_act_handler(const struct rte_flow_action	*action_item,
+			     struct ulp_rte_act_bitmap		*act,
+			     struct ulp_rte_act_prop		*act_prop);
+
 #endif /* _ULP_RTE_PARSER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 1d15563..89572c7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -96,6 +96,205 @@ uint32_t ulp_act_prop_map_table[] = {
 		BNXT_ULP_ACT_PROP_SZ_LAST
 };
 
+struct bnxt_ulp_rte_act_info ulp_act_info[] = {
+	[RTE_FLOW_ACTION_TYPE_END] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_END,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_VOID] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_void_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PASSTHRU] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_JUMP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_MARK] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_mark_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_FLAG] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_QUEUE] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DROP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_drop_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_COUNT] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_count_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_RSS] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_rss_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PF] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_pf_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_VF] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_vf_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PHY_PORT] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_phy_port_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PORT_ID] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_port_id_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_METER] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SECURITY] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_MPLS_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_DEC_MPLS_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_NW_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_DEC_NW_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_COPY_TTL_OUT] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_COPY_TTL_IN] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_POP_MPLS] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_vxlan_encap_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_VXLAN_DECAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_vxlan_decap_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_NVGRE_DECAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_RAW_ENCAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_RAW_DECAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV4_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV6_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_TP_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_TP_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_MAC_SWAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DEC_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_MAC_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_MAC_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_INC_TCP_ACK] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	}
+};
+
 struct bnxt_ulp_device_params ulp_device_params[] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
 		.global_fid_enable       = BNXT_ULP_SYM_YES,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 906b542..dfab266 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -74,6 +74,13 @@ enum bnxt_ulp_hdr_bit {
 	BNXT_ULP_HDR_BIT_LAST                = 0x0000000000200000
 };
 
+enum bnxt_ulp_act_type {
+	BNXT_ULP_ACT_TYPE_NOT_SUPPORTED = 0,
+	BNXT_ULP_ACT_TYPE_SUPPORTED = 1,
+	BNXT_ULP_ACT_TYPE_END = 2,
+	BNXT_ULP_ACT_TYPE_LAST = 3
+};
+
 enum bnxt_ulp_byte_order {
 	BNXT_ULP_BYTE_ORDER_BE,
 	BNXT_ULP_BYTE_ORDER_LE,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 0699634..47c0dd8 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -72,6 +72,19 @@ struct ulp_rte_act_prop {
 	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
 };
 
+/* Flow Parser Action Information Structure */
+struct bnxt_ulp_rte_act_info {
+	enum bnxt_ulp_act_type					act_type;
+	/* Flow Parser Protocol Action Function Prototype */
+	int32_t (*proto_act_func)
+		(const struct rte_flow_action			*action_item,
+		struct ulp_rte_act_bitmap			*act_bitmap,
+		struct ulp_rte_act_prop				*act_prop);
+};
+
+/* Flow Parser Action Information Structure Array defined in template source */
+extern struct bnxt_ulp_rte_act_info	ulp_act_info[];
+
 /* Flow Matcher structures */
 struct bnxt_ulp_header_match_info {
 	struct ulp_rte_hdr_bitmap		hdr_bitmap;
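
The table above maps every rte_flow action type to a handler so the
parser can dispatch without a large switch statement. Below is a
minimal sketch of such a dispatch loop (the function and local names
are illustrative; only ulp_act_info[] and its two fields come from
this patch, and a real implementation would also bounds-check the
action type against the table size):

	/* Illustrative walk over an rte_flow action array. */
	static int32_t
	example_act_walk(const struct rte_flow_action *actions,
			 struct ulp_rte_act_bitmap *act_bitmap,
			 struct ulp_rte_act_prop *act_prop)
	{
		const struct rte_flow_action *act;
		struct bnxt_ulp_rte_act_info *info;

		for (act = actions;
		     act->type != RTE_FLOW_ACTION_TYPE_END; act++) {
			info = &ulp_act_info[act->type];
			/* Reject any action the templates cannot handle. */
			if (info->act_type == BNXT_ULP_ACT_TYPE_NOT_SUPPORTED)
				return -EINVAL;
			if (info->proto_act_func &&
			    info->proto_act_func(act, act_bitmap, act_prop))
				return -EINVAL;
		}
		return 0;
	}
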
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 27/34] net/bnxt: add support for rte flow create driver hook
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (25 preceding siblings ...)
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 26/34] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
@ 2020-04-13 19:40   ` Venkat Duvvuru
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 28/34] net/bnxt: add support for rte flow validate " Venkat Duvvuru
                     ` (8 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:40 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Validates rte_flow_create arguments
2. Parses rte_flow_item types
3. Parses rte_flow_action types
4. Calls ulp_matcher_pattern_match to see if the flow is supported
5. If there is a match, calls ulp_mapper_flow_create to program
   key & action tables
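
As an illustration of the caller side (application code, not part of
this patch; port_id, the pattern and the MARK id below are arbitrary),
the hook is reached through the generic rte_flow API:

	/* needs rte_flow.h */
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_mark mark = { .id = 0x1234 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error = { 0 };
	struct rte_flow *flow;

	flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
	if (!flow)
		printf("flow create failed: %s\n",
		       error.message ? error.message : "(no message)");
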

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile               |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 177 ++++++++++++++++++++++++++++++++
 2 files changed, 178 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 5e2d751..5ed33cc 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -65,6 +65,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mapper.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_matcher.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_rte_parser.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp_flow.c
 
 #
 # Export include files
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
new file mode 100644
index 0000000..6402dd3
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -0,0 +1,177 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "bnxt_tf_common.h"
+#include "ulp_rte_parser.h"
+#include "ulp_matcher.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+#include <rte_malloc.h>
+
+static int32_t
+bnxt_ulp_flow_validate_args(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item pattern[],
+			    const struct rte_flow_action actions[],
+			    struct rte_flow_error *error)
+{
+	/* Perform the validation of the arguments for null */
+	if (!error)
+		return BNXT_TF_RC_ERROR;
+
+	if (!pattern) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				   NULL,
+				   "NULL pattern.");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL,
+				   "NULL action.");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL,
+				   "NULL attribute.");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (attr->egress && attr->ingress) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   attr,
+				   "EGRESS AND INGRESS UNSUPPORTED");
+		return BNXT_TF_RC_ERROR;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to create the rte flow. */
+static struct rte_flow *
+bnxt_ulp_flow_create(struct rte_eth_dev			*dev,
+		     const struct rte_flow_attr		*attr,
+		     const struct rte_flow_item		pattern[],
+		     const struct rte_flow_action	actions[],
+		     struct rte_flow_error		*error)
+{
+	struct ulp_rte_hdr_bitmap hdr_bitmap;
+	struct ulp_rte_hdr_field hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	struct ulp_rte_act_bitmap act_bitmap;
+	struct ulp_rte_act_prop act_prop;
+	enum ulp_direction_type dir = ULP_DIR_INGRESS;
+	uint32_t class_id, act_tmpl;
+	uint32_t app_priority;
+	int ret;
+	struct bnxt_ulp_context *ulp_ctx = NULL;
+	uint32_t vnic;
+	uint8_t svif;
+	struct rte_flow *flow_id;
+	uint32_t fid;
+
+	if (bnxt_ulp_flow_validate_args(attr,
+					pattern, actions,
+					error) == BNXT_TF_RC_ERROR) {
+		BNXT_TF_DBG(ERR, "Invalid arguments being passed\n");
+		return NULL;
+	}
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		return NULL;
+	}
+
+	/* clear the header bitmap and field structure */
+	memset(&hdr_bitmap, 0, sizeof(struct ulp_rte_hdr_bitmap));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(&act_bitmap, 0, sizeof(act_bitmap));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	svif = bnxt_get_svif(dev->data->port_id, false);
+	BNXT_TF_DBG(ERR, "SVIF for port[%d][port]=0x%08x\n",
+		    dev->data->port_id, svif);
+
+	hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].size = sizeof(svif);
+	hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].spec[0] = svif;
+	hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].mask[0] = -1;
+	ULP_BITMAP_SET(hdr_bitmap.bits, BNXT_ULP_HDR_BIT_SVIF);
+
+	/*
+	 * VNIC is being pushed as 32bit and the pop will take care of
+	 * proper size
+	 */
+	vnic = (uint32_t)bnxt_get_vnic_id(dev->data->port_id);
+	vnic = htonl(vnic);
+	rte_memcpy(&act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		   &vnic, BNXT_ULP_ACT_PROP_SZ_VNIC);
+
+	/* Parse the rte flow pattern */
+	ret = bnxt_ulp_rte_parser_hdr_parse(pattern,
+					    &hdr_bitmap,
+					    hdr_field);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	/* Parse the rte flow action */
+	ret = bnxt_ulp_rte_parser_act_parse(actions,
+					    &act_bitmap,
+					    &act_prop);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	if (attr->egress)
+		dir = ULP_DIR_EGRESS;
+
+	ret = ulp_matcher_pattern_match(dir, &hdr_bitmap, hdr_field,
+					&act_bitmap, &class_id);
+
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	ret = ulp_matcher_action_match(dir, &act_bitmap, &act_tmpl);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	app_priority = attr->priority;
+	/* call the ulp mapper to create the flow in the hardware */
+	ret = ulp_mapper_flow_create(ulp_ctx,
+				     app_priority,
+				     &hdr_bitmap,
+				     hdr_field,
+				     &act_bitmap,
+				     &act_prop,
+				     class_id,
+				     act_tmpl,
+				     &fid);
+	if (!ret) {
+		flow_id = (struct rte_flow *)((uintptr_t)fid);
+		return flow_id;
+	}
+
+parse_error:
+	rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+			   "Failed to create flow.");
+	return NULL;
+}
+
+const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
+	.validate = NULL,
+	.create = bnxt_ulp_flow_create,
+	.destroy = NULL,
+	.flush = NULL,
+	.query = NULL,
+	.isolate = NULL
+};
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 28/34] net/bnxt: add support for rte flow validate driver hook
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (26 preceding siblings ...)
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 27/34] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
@ 2020-04-13 19:40   ` Venkat Duvvuru
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 29/34] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
                     ` (7 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:40 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Validates rte_flow_validate arguments
2. Parses rte_flow_item types
3. Parses rte_flow_action types
4. Calls ulp_matcher_pattern_match to see if the flow is supported
5. If there is a match, returns success; otherwise, returns failure
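
From the application's point of view, the call mirrors the create
example in the previous patch except that no flow handle is produced;
a sketch (attr/pattern/actions as in that illustrative example):

	struct rte_flow_error error = { 0 };
	int rc;

	rc = rte_flow_validate(port_id, &attr, pattern, actions, &error);
	if (rc)
		printf("flow would be rejected: %s\n",
		       error.message ? error.message : "(no message)");
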

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 67 ++++++++++++++++++++++++++++++++-
 1 file changed, 66 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 6402dd3..490b2ba 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -167,8 +167,73 @@ bnxt_ulp_flow_create(struct rte_eth_dev			*dev,
 	return NULL;
 }
 
+/* Function to validate the rte flow. */
+static int
+bnxt_ulp_flow_validate(struct rte_eth_dev *dev __rte_unused,
+		       const struct rte_flow_attr *attr,
+		       const struct rte_flow_item pattern[],
+		       const struct rte_flow_action actions[],
+		       struct rte_flow_error *error)
+{
+	struct ulp_rte_hdr_bitmap hdr_bitmap;
+	struct ulp_rte_hdr_field hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	struct ulp_rte_act_bitmap act_bitmap;
+	struct ulp_rte_act_prop act_prop;
+	enum ulp_direction_type dir = ULP_DIR_INGRESS;
+	uint32_t class_id, act_tmpl;
+	int ret;
+
+	if (bnxt_ulp_flow_validate_args(attr,
+					pattern, actions,
+					error) == BNXT_TF_RC_ERROR) {
+		BNXT_TF_DBG(ERR, "Invalid arguments being passed\n");
+		return -EINVAL;
+	}
+
+	/* clear the header bitmap and field structure */
+	memset(&hdr_bitmap, 0, sizeof(struct ulp_rte_hdr_bitmap));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(&act_bitmap, 0, sizeof(act_bitmap));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	/* Parse the rte flow pattern */
+	ret = bnxt_ulp_rte_parser_hdr_parse(pattern,
+					    &hdr_bitmap,
+					    hdr_field);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	/* Parse the rte flow action */
+	ret = bnxt_ulp_rte_parser_act_parse(actions,
+					    &act_bitmap,
+					    &act_prop);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	if (attr->egress)
+		dir = ULP_DIR_EGRESS;
+
+	ret = ulp_matcher_pattern_match(dir, &hdr_bitmap, hdr_field,
+					&act_bitmap, &class_id);
+
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	ret = ulp_matcher_action_match(dir, &act_bitmap, &act_tmpl);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	/* all good, return success */
+	return ret;
+
+parse_error:
+	rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+			   "Failed to validate flow.");
+	return -EINVAL;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
-	.validate = NULL,
+	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = NULL,
 	.flush = NULL,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 29/34] net/bnxt: add support for rte flow destroy driver hook
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (27 preceding siblings ...)
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 28/34] net/bnxt: add support for rte flow validate " Venkat Duvvuru
@ 2020-04-13 19:40   ` Venkat Duvvuru
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 30/34] net/bnxt: add support for rte flow flush " Venkat Duvvuru
                     ` (6 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:40 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Gets the ulp session information from eth_dev
2. Fetches the flow associated with the flow id from the flow table
3. Calls ulp_mapper_resources_free which releases the key & action
   tables associated with that flow
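
Note that the handle returned by the create hook is just the 32-bit
flow id cast to a pointer, so destroy only needs to cast it back. A
caller-side sketch (flow as returned by the earlier rte_flow_create
example):

	struct rte_flow_error error = { 0 };

	/* The driver recovers the fid via (uint32_t)(uintptr_t)flow. */
	if (rte_flow_destroy(port_id, flow, &error))
		printf("flow destroy failed: %s\n",
		       error.message ? error.message : "(no message)");
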

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 490b2ba..35099a3 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -232,10 +232,40 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev __rte_unused,
 	return -EINVAL;
 }
 
+/* Function to destroy the rte flow. */
+static int
+bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
+		      struct rte_flow *flow,
+		      struct rte_flow_error *error)
+{
+	int ret = 0;
+	struct bnxt_ulp_context *ulp_ctx;
+	uint32_t fid;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to destroy flow.");
+		return -EINVAL;
+	}
+
+	fid = (uint32_t)(uintptr_t)flow;
+
+	ret = ulp_mapper_flow_destroy(ulp_ctx, fid);
+	if (ret)
+		rte_flow_error_set(error, -ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to destroy flow.");
+
+	return ret;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
-	.destroy = NULL,
+	.destroy = bnxt_ulp_flow_destroy,
 	.flush = NULL,
 	.query = NULL,
 	.isolate = NULL
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 30/34] net/bnxt: add support for rte flow flush driver hook
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (28 preceding siblings ...)
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 29/34] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
@ 2020-04-13 19:40   ` Venkat Duvvuru
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 31/34] net/bnxt: register tf rte flow ops Venkat Duvvuru
                     ` (5 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:40 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Gets the ulp session information from eth_dev
2. Fetches the rte_flow table associated with this session
3. Iterates through all the flows in the flow table
4. Calls ulp_mapper_resources_free which releases the key & action
   tables associated with each flow
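
On the caller side this is a single call that tears down every flow
programmed on the port; a sketch:

	struct rte_flow_error error = { 0 };

	if (rte_flow_flush(port_id, &error))
		printf("flow flush failed: %s\n",
		       error.message ? error.message : "(no message)");
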

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c      |  3 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 33 +++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c   | 69 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h   | 11 ++++++
 4 files changed, 115 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 3795c6d..56e08f2 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -517,6 +517,9 @@ bnxt_ulp_deinit(struct bnxt *bp)
 	if (!session)
 		return;
 
+	/* clean up regular flows */
+	ulp_flow_db_flush_flows(&bp->ulp_ctx, BNXT_ULP_REGULAR_FLOW_TABLE);
+
 	/* cleanup the eem table scope */
 	ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 35099a3..4958895 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -262,11 +262,42 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Function to destroy the rte flows. */
+static int32_t
+bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
+		    struct rte_flow_error *error)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+	int32_t ret;
+	struct bnxt *bp;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to flush flow.");
+		return -EINVAL;
+	}
+	bp = eth_dev->data->dev_private;
+
+	/* Free the resources for the last device */
+	if (!ulp_ctx_deinit_allowed(bp))
+		return 0;
+
+	ret = ulp_flow_db_flush_flows(ulp_ctx, BNXT_ULP_REGULAR_FLOW_TABLE);
+	if (ret)
+		rte_flow_error_set(error, ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to flush flow.");
+	return ret;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = bnxt_ulp_flow_destroy,
-	.flush = NULL,
+	.flush = bnxt_ulp_flow_flush,
 	.query = NULL,
 	.isolate = NULL
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 76ec856..68ba6d4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -555,3 +555,72 @@ int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
 	/* all good, return success */
 	return 0;
 }
+
+/** Get the flow database entry iteratively
+ *
+ * flow_tbl [in] Ptr to flow table
+ * fid [in/out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+static int32_t
+ulp_flow_db_next_entry_get(struct bnxt_ulp_flow_tbl	*flowtbl,
+			   uint32_t			*fid)
+{
+	uint32_t	lfid = *fid;
+	uint32_t	idx;
+	uint64_t	bs;
+
+	do {
+		lfid++;
+		if (lfid >= flowtbl->num_flows)
+			return -ENOENT;
+		idx = lfid / ULP_INDEX_BITMAP_SIZE;
+		while (!(bs = flowtbl->active_flow_tbl[idx])) {
+			idx++;
+			if ((idx * ULP_INDEX_BITMAP_SIZE) >= flowtbl->num_flows)
+				return -ENOENT;
+		}
+		lfid = (idx * ULP_INDEX_BITMAP_SIZE) + __builtin_clzl(bs);
+		if (*fid >= lfid) {
+			BNXT_TF_DBG(ERR, "Flow Database is corrupt\n");
+			return -ENOENT;
+		}
+	} while (!ulp_flow_db_active_flow_is_set(flowtbl, lfid));
+
+	/* all good, return success */
+	*fid = lfid;
+	return 0;
+}
+
+/*
+ * Flush all flows in the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * tbl_idx [in] The index to table
+ *
+ * returns 0 on success or negative number on failure
+ */
+int32_t	ulp_flow_db_flush_flows(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t		idx)
+{
+	uint32_t			fid = 0;
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid Argument\n");
+		return -EINVAL;
+	}
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctx);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Flow database not found\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[idx];
+	while (!ulp_flow_db_next_entry_get(flow_tbl, &fid))
+		(void)ulp_mapper_resources_free(ulp_ctx, fid, idx);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index eb5effa..5435415 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -142,4 +142,15 @@ int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
 			     enum bnxt_ulp_flow_db_tables	tbl_idx,
 			     uint32_t				fid);
 
+/*
+ * Flush all flows in the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * tbl_idx [in] The index to table
+ *
+ * returns 0 on success or negative number on failure
+ */
+int32_t	ulp_flow_db_flush_flows(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t		idx);
+
 #endif /* _ULP_FLOW_DB_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 31/34] net/bnxt: register tf rte flow ops
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (29 preceding siblings ...)
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 30/34] net/bnxt: add support for rte flow flush " Venkat Duvvuru
@ 2020-04-13 19:40   ` Venkat Duvvuru
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled Venkat Duvvuru
                     ` (4 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:40 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

Register bnxt_ulp_rte_flow_ops when host based TRUFLOW is
enabled.
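
For context, the rte_flow library reaches this ops table through the
generic filter-control path; a sketch of the lookup it performs
(equivalent to the library's internal ops-get helper at the time):

	/* needs rte_ethdev.h */
	const struct rte_flow_ops *ops = NULL;
	int rc;

	rc = rte_eth_dev_filter_ctrl(port_id, RTE_ETH_FILTER_GENERIC,
				     RTE_ETH_FILTER_GET, &ops);
	/* With truflow enabled, ops now points at bnxt_ulp_rte_flow_ops. */
	if (!rc && ops)
		printf("rte_flow ops available on port %u\n", port_id);
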

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        | 1 +
 drivers/net/bnxt/bnxt_ethdev.c | 6 +++++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index cd20740..a70cdff 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -731,6 +731,7 @@ extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
 
+extern const struct rte_flow_ops bnxt_ulp_rte_flow_ops;
 int32_t bnxt_ulp_init(struct bnxt *bp);
 void bnxt_ulp_deinit(struct bnxt *bp);
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 2f08921..783e6a4 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3288,6 +3288,7 @@ bnxt_filter_ctrl_op(struct rte_eth_dev *dev,
 		    enum rte_filter_type filter_type,
 		    enum rte_filter_op filter_op, void *arg)
 {
+	struct bnxt *bp = dev->data->dev_private;
 	int ret = 0;
 
 	ret = is_bnxt_in_error(dev->data->dev_private);
@@ -3311,7 +3312,10 @@ bnxt_filter_ctrl_op(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_GENERIC:
 		if (filter_op != RTE_ETH_FILTER_GET)
 			return -EINVAL;
-		*(const void **)arg = &bnxt_flow_ops;
+		if (bp->truflow)
+			*(const void **)arg = &bnxt_ulp_rte_flow_ops;
+		else
+			*(const void **)arg = &bnxt_flow_ops;
 		break;
 	default:
 		PMD_DRV_LOG(ERR,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (30 preceding siblings ...)
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 31/34] net/bnxt: register tf rte flow ops Venkat Duvvuru
@ 2020-04-13 19:40   ` Venkat Duvvuru
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 33/34] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
                     ` (3 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:40 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

If bp->truflow is set, then don't enable vector mode.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 783e6a4..5d5b8e0 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -788,7 +788,8 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 		DEV_RX_OFFLOAD_TCP_CKSUM |
 		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_RSS_HASH |
-		DEV_RX_OFFLOAD_VLAN_FILTER))) {
+		DEV_RX_OFFLOAD_VLAN_FILTER)) &&
+	    !bp->truflow) {
 		PMD_DRV_LOG(INFO, "Using vector mode receive for port %d\n",
 			    eth_dev->data->port_id);
 		bp->flags |= BNXT_FLAG_RX_VECTOR_PKT_MODE;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 33/34] net/bnxt: add support for injecting mark into packet’s mbuf
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (31 preceding siblings ...)
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled Venkat Duvvuru
@ 2020-04-13 19:40   ` Venkat Duvvuru
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 34/34] net/bnxt: enable meson build on truflow code Venkat Duvvuru
                     ` (2 subsequent siblings)
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:40 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Mike Baucom

When a flow is offloaded with the MARK action (RTE_FLOW_ACTION_TYPE_MARK),
each packet of that flow will have metadata set in its completion.
This metadata is used to fetch an index into a mark table where
the actual MARK for that flow is stored. Fetch the MARK from the mark
table and inject it into the packet’s mbuf.
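
On the receive side an application sees the result through standard
mbuf fields; an illustrative burst-loop snippet (handle_mark() is a
hypothetical application callback):

	/* needs rte_ethdev.h and rte_mbuf.h */
	struct rte_mbuf *pkts[32];
	uint16_t i, nb;

	nb = rte_eth_rx_burst(port_id, 0, pkts, 32);
	for (i = 0; i < nb; i++) {
		/* hash.fdir.hi carries the MARK id the flow was
		 * offloaded with.
		 */
		if (pkts[i]->ol_flags & PKT_RX_FDIR_ID)
			handle_mark(pkts[i], pkts[i]->hash.fdir.hi);
		rte_pktmbuf_free(pkts[i]);
	}
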

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_rxr.c            | 153 ++++++++++++++++++++++++---------
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  55 +++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |  18 ++++
 3 files changed, 183 insertions(+), 43 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index bef9720..40da2f2 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -20,6 +20,9 @@
 #include "bnxt_hwrm.h"
 #endif
 
+#include <bnxt_tf_common.h>
+#include <ulp_mark_mgr.h>
+
 /*
  * RX Ring handling
  */
@@ -399,6 +402,109 @@ bnxt_get_rx_ts_thor(struct bnxt *bp, uint32_t rx_ts_cmpl)
 }
 #endif
 
+static void
+bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
+			  struct rte_mbuf *mbuf)
+{
+	uint32_t cfa_code;
+	uint32_t meta_fmt;
+	uint32_t meta;
+	uint32_t eem = 0;
+	uint32_t mark_id;
+	uint32_t flags2;
+	int rc;
+
+	cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
+	flags2 = rte_le_to_cpu_32(rxcmp1->flags2);
+	meta = rte_le_to_cpu_32(rxcmp1->metadata);
+	if (meta) {
+		meta >>= BNXT_RX_META_CFA_CODE_SHIFT;
+
+		/* The flags field holds extra bits of info from [6:4]
+		 * which indicate if the flow is in TCAM or EM or EEM
+		 */
+		meta_fmt = (flags2 & BNXT_CFA_META_FMT_MASK) >>
+			    BNXT_CFA_META_FMT_SHFT;
+		/* meta_fmt == 4 => 'b100 => 'b10x => EM.
+		 * meta_fmt == 5 => 'b101 => 'b10x => EM + VLAN
+		 * meta_fmt == 6 => 'b110 => 'b11x => EEM
+		 * meta_fmt == 7 => 'b111 => 'b11x => EEM + VLAN.
+		 */
+		meta_fmt >>= BNXT_CFA_META_FMT_EM_EEM_SHFT;
+
+		eem = meta_fmt == BNXT_CFA_META_FMT_EEM;
+
+		/* For EEM flows, the first part of cfa_code is 16 bits.
+		 * The second part is embedded in the
+		 * metadata field from bit 19 onwards. The driver needs to
+		 * ignore the first 19 bits of metadata and use the next 12
+		 * bits as higher 12 bits of cfa_code.
+		 */
+		if (eem)
+			cfa_code |= meta << BNXT_CFA_CODE_META_SHIFT;
+	}
+
+	if (cfa_code) {
+		mbuf->hash.fdir.hi = 0;
+		mbuf->hash.fdir.id = 0;
+		if (eem)
+			rc = ulp_mark_db_mark_get(&bp->ulp_ctx, true,
+						  cfa_code, &mark_id);
+		else
+			rc = ulp_mark_db_mark_get(&bp->ulp_ctx, false,
+						  cfa_code, &mark_id);
+		/* If the above fails, simply return and don't add the
+		 * mark to the mbuf.
+		 */
+		if (rc)
+			return;
+
+		mbuf->hash.fdir.hi	= mark_id;
+		mbuf->udata64		= (cfa_code & 0xffffffffull) << 32;
+		mbuf->hash.fdir.id	= rxcmp1->cfa_code;
+		mbuf->ol_flags		|= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	}
+}
+
+void bnxt_set_mark_in_mbuf(struct bnxt *bp,
+			   struct rx_pkt_cmpl_hi *rxcmp1,
+			   struct rte_mbuf *mbuf)
+{
+	uint32_t cfa_code = 0;
+	uint8_t meta_fmt = 0;
+	uint16_t flags2 = 0;
+	uint32_t meta =  0;
+
+	cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
+	if (!cfa_code)
+		return;
+
+	if (cfa_code && !bp->mark_table[cfa_code].valid)
+		return;
+
+	flags2 = rte_le_to_cpu_16(rxcmp1->flags2);
+	meta = rte_le_to_cpu_32(rxcmp1->metadata);
+	if (meta) {
+		meta >>= BNXT_RX_META_CFA_CODE_SHIFT;
+
+		/* The flags field holds extra bits of info from [6:4]
+		 * which indicate if the flow is in TCAM or EM or EEM
+		 */
+		meta_fmt = (flags2 & BNXT_CFA_META_FMT_MASK) >>
+			   BNXT_CFA_META_FMT_SHFT;
+
+		/* meta_fmt == 4 => 'b100 => 'b10x => EM.
+		 * meta_fmt == 5 => 'b101 => 'b10x => EM + VLAN
+		 * meta_fmt == 6 => 'b110 => 'b11x => EEM
+		 * meta_fmt == 7 => 'b111 => 'b11x => EEM + VLAN.
+		 */
+		meta_fmt >>= BNXT_CFA_META_FMT_EM_EEM_SHFT;
+	}
+
+	mbuf->hash.fdir.hi = bp->mark_table[cfa_code].mark_id;
+	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+}
+
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 			    struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
 {
@@ -415,6 +521,7 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	uint16_t cmp_type;
 	uint32_t flags2_f = 0;
 	uint16_t flags_type;
+	struct bnxt *bp = rxq->bp;
 
 	rxcmp = (struct rx_pkt_cmpl *)
 	    &cpr->cp_desc_ring[cp_cons];
@@ -490,7 +597,10 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 		mbuf->ol_flags |= PKT_RX_RSS_HASH;
 	}
 
-	bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+	if (bp->truflow)
+		bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+	else
+		bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (unlikely((flags_type & RX_PKT_CMPL_FLAGS_MASK) ==
@@ -896,44 +1006,3 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
 
 	return 0;
 }
-
-void bnxt_set_mark_in_mbuf(struct bnxt *bp,
-			   struct rx_pkt_cmpl_hi *rxcmp1,
-			   struct rte_mbuf *mbuf)
-{
-	uint32_t cfa_code = 0;
-	uint8_t meta_fmt =  0;
-	uint16_t flags2 = 0;
-	uint32_t meta =  0;
-
-	cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
-	if (!cfa_code)
-		return;
-
-	if (cfa_code && !bp->mark_table[cfa_code].valid)
-		return;
-
-	flags2 = rte_le_to_cpu_16(rxcmp1->flags2);
-	meta = rte_le_to_cpu_32(rxcmp1->metadata);
-	if (meta) {
-		meta >>= BNXT_RX_META_CFA_CODE_SHIFT;
-
-		/*
-		 * The flags field holds extra bits of info from [6:4]
-		 * which indicate if the flow is in TCAM or EM or EEM
-		 */
-		meta_fmt = (flags2 & BNXT_CFA_META_FMT_MASK) >>
-			   BNXT_CFA_META_FMT_SHFT;
-
-		/*
-		 * meta_fmt == 4 => 'b100 => 'b10x => EM.
-		 * meta_fmt == 5 => 'b101 => 'b10x => EM + VLAN
-		 * meta_fmt == 6 => 'b110 => 'b11x => EEM
-		 * meta_fmt == 7 => 'b111 => 'b11x => EEM + VLAN.
-		 */
-		meta_fmt >>= BNXT_CFA_META_FMT_EM_EEM_SHFT;
-	}
-
-	mbuf->hash.fdir.hi = bp->mark_table[cfa_code].mark_id;
-	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 566668e..ad83531 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -58,7 +58,7 @@ ulp_mark_db_mark_set(struct bnxt_ulp_context *ctxt,
 	idx = ulp_mark_db_idx_get(is_gfid, fid, mtbl);
 
 	if (is_gfid) {
-		BNXT_TF_DBG(ERR, "Set GFID[0x%0x] = 0x%0x\n", idx, mark);
+		BNXT_TF_DBG(DEBUG, "Set GFID[0x%0x] = 0x%0x\n", idx, mark);
 
 		mtbl->gfid_tbl[idx].mark_id = mark;
 		mtbl->gfid_tbl[idx].valid = true;
@@ -176,6 +176,59 @@ ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
 }
 
 /*
+ * Get a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * fid [in] The flow id that is returned by HW in BD
+ *
+ * mark [out] The mark that is associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_get(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t *mark)
+{
+	struct bnxt_ulp_mark_tbl *mtbl;
+	uint32_t idx = 0;
+
+	if (!ctxt || !mark)
+		return -EINVAL;
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+	if (!mtbl) {
+		BNXT_TF_DBG(ERR, "Unable to get Mark Table\n");
+		return -EINVAL;
+	}
+
+	idx = ulp_mark_db_idx_get(is_gfid, fid, mtbl);
+
+	if (is_gfid) {
+		if (!mtbl->gfid_tbl[idx].valid)
+			return -EINVAL;
+
+		BNXT_TF_DBG(DEBUG, "Get GFID[0x%0x] = 0x%0x\n",
+			    idx, mtbl->gfid_tbl[idx].mark_id);
+
+		*mark = mtbl->gfid_tbl[idx].mark_id;
+	} else {
+		if (!mtbl->lfid_tbl[idx].valid)
+			return -EINVAL;
+
+		BNXT_TF_DBG(DEBUG, "Get LFID[0x%0x] = 0x%0x\n",
+			    idx, mtbl->lfid_tbl[idx].mark_id);
+
+		*mark = mtbl->lfid_tbl[idx].mark_id;
+	}
+
+	return 0;
+}
+
+/*
  * Adds a Mark to the Mark Manager
  *
  * ctxt [in] The ulp context for the mark manager
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index f0d1515..0f8a5e5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -55,6 +55,24 @@ int32_t
 ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
 
 /*
+ * Get a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * fid [in] The flow id that is returned by HW in BD
+ *
+ * mark [out] The mark that is associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_get(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t *mark);
+
+/*
  * Adds a Mark to the Mark Manager
  *
  * ctxt [in] The ulp context for the mark manager
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v2 34/34] net/bnxt: enable meson build on truflow code
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (32 preceding siblings ...)
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 33/34] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
@ 2020-04-13 19:40   ` Venkat Duvvuru
  2020-04-13 21:35   ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Thomas Monjalon
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
  35 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-13 19:40 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

Include tf_ulp & tf_core directories and the files inside them.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 0c311d2..d75f887 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -1,7 +1,12 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
+# Copyright(c) 2020 Broadcom
 
 install_headers('rte_pmd_bnxt.h')
+
+includes += include_directories('tf_ulp')
+includes += include_directories('tf_core')
+
 sources = files('bnxt_cpr.c',
 	'bnxt_ethdev.c',
 	'bnxt_filter.c',
@@ -16,6 +21,27 @@ sources = files('bnxt_cpr.c',
 	'bnxt_txr.c',
 	'bnxt_util.c',
 	'bnxt_vnic.c',
+
+	'tf_core/tf_core.c',
+	'tf_core/bitalloc.c',
+	'tf_core/tf_msg.c',
+	'tf_core/rand.c',
+	'tf_core/stack.c',
+	'tf_core/tf_em.c',
+	'tf_core/tf_rm.c',
+	'tf_core/tf_tbl.c',
+	'tf_core/tfp.c',
+
+	'tf_ulp/bnxt_ulp.c',
+	'tf_ulp/ulp_mark_mgr.c',
+	'tf_ulp/ulp_flow_db.c',
+	'tf_ulp/ulp_template_db.c',
+	'tf_ulp/ulp_utils.c',
+	'tf_ulp/ulp_mapper.c',
+	'tf_ulp/ulp_matcher.c',
+	'tf_ulp/ulp_rte_parser.c',
+	'tf_ulp/bnxt_ulp_flow.c',
+
 	'rte_pmd_bnxt.c')
 
 if arch_subdir == 'x86'
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (33 preceding siblings ...)
  2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 34/34] net/bnxt: enable meson build on truflow code Venkat Duvvuru
@ 2020-04-13 21:35   ` Thomas Monjalon
  2020-04-15  8:56     ` Venkat Duvvuru
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
  35 siblings, 1 reply; 154+ messages in thread
From: Thomas Monjalon @ 2020-04-13 21:35 UTC (permalink / raw)
  To: Venkat Duvvuru; +Cc: dev

13/04/2020 21:39, Venkat Duvvuru:
> This patchset introduces a new mechanism to allow host-memory based
> flow table management. This should allow higher flow scalability
> than what is currently supported. This new approach also defines a
> new rte_flow parser, and mapper which currently supports basic packet
> classification in receive path. The patchset uses a newly implemented
> control-plane firmware interface which optimizes flow insertions and
> deletions.
> 
> This is a baseline patchset with limited scale. Follow on patches will
> add support for more protocol headers, rte_flow attributes, actions
> and such.

It seems this patchset is adding features but I don't see any
documentation update. Should you update the feature list?
	doc/guides/nics/features/bnxt.ini




^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 00/34] add support for host based flow table management
  2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
                     ` (34 preceding siblings ...)
  2020-04-13 21:35   ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Thomas Monjalon
@ 2020-04-14  8:12   ` Venkat Duvvuru
  2020-04-14  8:12     ` [dpdk-dev] [PATCH v3 01/34] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
                       ` (34 more replies)
  35 siblings, 35 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:12 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

This patchset introduces a new mechanism to allow host-memory based
flow table management. This should allow higher flow scalability
than what is currently supported. This new approach also defines a
new rte_flow parser, and mapper which currently supports basic packet
classification in receive path. The patchset uses a newly implemented
control-plane firmware interface which optimizes flow insertions and
deletions.

This is a baseline patchset with limited scale. Follow on patches will
add support for more protocol headers, rte_flow attributes, actions
and such.

Currently the code path is disabled by default and can be enabled
using bnxt devargs, for example: "-w 0000:0d:00.0,host-based-truflow=1".
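
For instance (hypothetical invocation; any EAL application accepts the
same device argument), with testpmd:

	./testpmd -w 0000:0d:00.0,host-based-truflow=1 -- -i
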

v2==>v3
=======
1. Fixed compilation issues reported by CI

Ajit Kumar Khaparde (1):
  net/bnxt: add updated dpdk hsi structure

Farah Smith (2):
  net/bnxt: add tf core identifier support
  net/bnxt: add tf core table scope support

Kishore Padmanabha (8):
  net/bnxt: match rte flow items with flow template patterns
  net/bnxt: match rte flow actions with flow template actions
  net/bnxt: add support for rte flow item parsing
  net/bnxt: add support for rte flow action parsing
  net/bnxt: add support for rte flow create driver hook
  net/bnxt: add support for rte flow validate driver hook
  net/bnxt: add support for rte flow destroy driver hook
  net/bnxt: add support for rte flow flush driver hook

Michael Wildt (4):
  net/bnxt: add initial tf core session open
  net/bnxt: add initial tf core session close support
  net/bnxt: add tf core session sram functions
  net/bnxt: add resource manager functionality

Mike Baucom (5):
  net/bnxt: add helper functions for blob/regfile ops
  net/bnxt: add support to process action tables
  net/bnxt: add support to process key tables
  net/bnxt: add support to free key and action tables
  net/bnxt: add support to alloc and program key and act tbls

Pete Spreadborough (2):
  net/bnxt: add truflow message handlers
  net/bnxt: add EM/EEM functionality

Randy Schacher (1):
  net/bnxt: update hwrm prep to use ptr

Shahaji Bhosle (2):
  net/bnxt: add initial tf core resource mgmt support
  net/bnxt: add tf core TCAM support

Venkat Duvvuru (9):
  net/bnxt: fetch SVIF information from the firmware
  net/bnxt: fetch vnic info from DPDK port
  net/bnxt: add devargs parameter for host memory based TRUFLOW feature
  net/bnxt: add support for ULP session manager init
  net/bnxt: add support for ULP session manager cleanup
  net/bnxt: register tf rte flow ops
  net/bnxt: disable vector mode when host based TRUFLOW is enabled
  net/bnxt: add support for injecting mark into packet’s mbuf
  net/bnxt: enable meson build on truflow code

 drivers/net/bnxt/Makefile                       |   24 +
 drivers/net/bnxt/bnxt.h                         |   21 +-
 drivers/net/bnxt/bnxt_ethdev.c                  |  118 +-
 drivers/net/bnxt/bnxt_hwrm.c                    |  319 +-
 drivers/net/bnxt/bnxt_hwrm.h                    |   19 +
 drivers/net/bnxt/bnxt_rxr.c                     |  153 +-
 drivers/net/bnxt/hsi_struct_def_dpdk.h          | 3786 ++++++++++++++++++++---
 drivers/net/bnxt/meson.build                    |   26 +
 drivers/net/bnxt/tf_core/bitalloc.c             |  364 +++
 drivers/net/bnxt/tf_core/bitalloc.h             |  119 +
 drivers/net/bnxt/tf_core/hwrm_tf.h              |  992 ++++++
 drivers/net/bnxt/tf_core/lookup3.h              |  162 +
 drivers/net/bnxt/tf_core/rand.c                 |   47 +
 drivers/net/bnxt/tf_core/rand.h                 |   36 +
 drivers/net/bnxt/tf_core/stack.c                |  107 +
 drivers/net/bnxt/tf_core/stack.h                |  107 +
 drivers/net/bnxt/tf_core/tf_core.c              |  659 ++++
 drivers/net/bnxt/tf_core/tf_core.h              | 1376 ++++++++
 drivers/net/bnxt/tf_core/tf_em.c                |  515 +++
 drivers/net/bnxt/tf_core/tf_em.h                |  117 +
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h   |  166 +
 drivers/net/bnxt/tf_core/tf_msg.c               | 1248 ++++++++
 drivers/net/bnxt/tf_core/tf_msg.h               |  256 ++
 drivers/net/bnxt/tf_core/tf_msg_common.h        |   47 +
 drivers/net/bnxt/tf_core/tf_project.h           |   24 +
 drivers/net/bnxt/tf_core/tf_resources.h         |  542 ++++
 drivers/net/bnxt/tf_core/tf_rm.c                | 3297 ++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_rm.h                |  321 ++
 drivers/net/bnxt/tf_core/tf_session.h           |  300 ++
 drivers/net/bnxt/tf_core/tf_tbl.c               | 1836 +++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.h               |  126 +
 drivers/net/bnxt/tf_core/tfp.c                  |  163 +
 drivers/net/bnxt/tf_core/tfp.h                  |  188 ++
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h        |   54 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c              |  695 +++++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h              |  110 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c         |  303 ++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c           |  626 ++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h           |  156 +
 drivers/net/bnxt/tf_ulp/ulp_mapper.c            | 1502 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h            |   69 +
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c          |  271 ++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h          |  111 +
 drivers/net/bnxt/tf_ulp/ulp_matcher.c           |  188 ++
 drivers/net/bnxt/tf_ulp/ulp_matcher.h           |   35 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c        | 1208 ++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h        |  203 ++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c       | 1713 ++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h       |  354 +++
 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h |  133 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h   |  266 ++
 drivers/net/bnxt/tf_ulp/ulp_utils.c             |  521 ++++
 drivers/net/bnxt/tf_ulp/ulp_utils.h             |  279 ++
 53 files changed, 25883 insertions(+), 495 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.c
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.h
 create mode 100644 drivers/net/bnxt/tf_core/hwrm_tf.h
 create mode 100644 drivers/net/bnxt/tf_core/lookup3.h
 create mode 100644 drivers/net/bnxt/tf_core/rand.c
 create mode 100644 drivers/net/bnxt/tf_core/rand.h
 create mode 100644 drivers/net/bnxt/tf_core/stack.c
 create mode 100644 drivers/net/bnxt/tf_core/stack.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_project.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_resources.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tfp.c
 create mode 100644 drivers/net/bnxt/tf_core/tfp.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.c
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_struct.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 01/34] net/bnxt: add updated dpdk hsi structure
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
@ 2020-04-14  8:12     ` Venkat Duvvuru
  2020-04-14  8:12     ` [dpdk-dev] [PATCH v3 02/34] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
                       ` (33 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:12 UTC (permalink / raw)
  To: dev; +Cc: Ajit Kumar Khaparde, Randy Schacher

From: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>

- Add most recent bnxt dpdk header.
- HWRM version updated to 1.10.1.30

Signed-off-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
---
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 3786 +++++++++++++++++++++++++++++---
 1 file changed, 3436 insertions(+), 350 deletions(-)

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index c2bae0f..cde96e7 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2014-2019 Broadcom Inc.
+ * Copyright (c) 2014-2020 Broadcom Inc.
  * All rights reserved.
  *
  * DO NOT MODIFY!!! This file is automatically generated.
@@ -386,6 +386,8 @@ struct cmd_nums {
 	#define HWRM_PORT_PHY_MDIO_READ                   UINT32_C(0xb6)
 	#define HWRM_PORT_PHY_MDIO_BUS_ACQUIRE            UINT32_C(0xb7)
 	#define HWRM_PORT_PHY_MDIO_BUS_RELEASE            UINT32_C(0xb8)
+	#define HWRM_PORT_QSTATS_EXT_PFC_WD               UINT32_C(0xb9)
+	#define HWRM_PORT_ECN_QSTATS                      UINT32_C(0xba)
 	#define HWRM_FW_RESET                             UINT32_C(0xc0)
 	#define HWRM_FW_QSTATUS                           UINT32_C(0xc1)
 	#define HWRM_FW_HEALTH_CHECK                      UINT32_C(0xc2)
@@ -404,6 +406,8 @@ struct cmd_nums {
 	#define HWRM_FW_GET_STRUCTURED_DATA               UINT32_C(0xcb)
 	/* Experimental */
 	#define HWRM_FW_IPC_MAILBOX                       UINT32_C(0xcc)
+	#define HWRM_FW_ECN_CFG                           UINT32_C(0xcd)
+	#define HWRM_FW_ECN_QCFG                          UINT32_C(0xce)
 	#define HWRM_EXEC_FWD_RESP                        UINT32_C(0xd0)
 	#define HWRM_REJECT_FWD_RESP                      UINT32_C(0xd1)
 	#define HWRM_FWD_RESP                             UINT32_C(0xd2)
@@ -419,6 +423,7 @@ struct cmd_nums {
 	#define HWRM_TEMP_MONITOR_QUERY                   UINT32_C(0xe0)
 	#define HWRM_REG_POWER_QUERY                      UINT32_C(0xe1)
 	#define HWRM_CORE_FREQUENCY_QUERY                 UINT32_C(0xe2)
+	#define HWRM_REG_POWER_HISTOGRAM                  UINT32_C(0xe3)
 	#define HWRM_WOL_FILTER_ALLOC                     UINT32_C(0xf0)
 	#define HWRM_WOL_FILTER_FREE                      UINT32_C(0xf1)
 	#define HWRM_WOL_FILTER_QCFG                      UINT32_C(0xf2)
@@ -510,7 +515,7 @@ struct cmd_nums {
 	#define HWRM_CFA_EEM_OP                           UINT32_C(0x123)
 	/* Experimental */
 	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS              UINT32_C(0x124)
-	/* Experimental */
+	/* Experimental - DEPRECATED */
 	#define HWRM_CFA_TFLIB                            UINT32_C(0x125)
 	/* Engine CKV - Get the current allocation status of keys provisioned in the key vault. */
 	#define HWRM_ENGINE_CKV_STATUS                    UINT32_C(0x12e)
@@ -629,6 +634,56 @@ struct cmd_nums {
 	 * to the host test.
 	 */
 	#define HWRM_MFG_HDMA_TEST                        UINT32_C(0x209)
+	/* Tells the fw to program the fru memory */
+	#define HWRM_MFG_FRU_EEPROM_WRITE                 UINT32_C(0x20a)
+	/* Tells the fw to read the fru memory */
+	#define HWRM_MFG_FRU_EEPROM_READ                  UINT32_C(0x20b)
+	/* Experimental */
+	#define HWRM_TF                                   UINT32_C(0x2bc)
+	/* Experimental */
+	#define HWRM_TF_VERSION_GET                       UINT32_C(0x2bd)
+	/* Experimental */
+	#define HWRM_TF_SESSION_OPEN                      UINT32_C(0x2c6)
+	/* Experimental */
+	#define HWRM_TF_SESSION_ATTACH                    UINT32_C(0x2c7)
+	/* Experimental */
+	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2c8)
+	/* Experimental */
+	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2c9)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2ca)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cb)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2cc)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cd)
+	/* Experimental */
+	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2d0)
+	/* Experimental */
+	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2d1)
+	/* Experimental */
+	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2da)
+	/* Experimental */
+	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2db)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2dc)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2dd)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2de)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2df)
+	/* Experimental */
+	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2ee)
+	/* Experimental */
+	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2ef)
+	/* Experimental */
+	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2f0)
+	/* Experimental */
+	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2f1)
+	/* Experimental */
+	#define HWRM_SV                                   UINT32_C(0x400)
 	/* Experimental */
 	#define HWRM_DBG_READ_DIRECT                      UINT32_C(0xff10)
 	/* Experimental */
@@ -658,6 +713,8 @@ struct cmd_nums {
 	#define HWRM_DBG_CRASHDUMP_HEADER                 UINT32_C(0xff1d)
 	/* Experimental */
 	#define HWRM_DBG_CRASHDUMP_ERASE                  UINT32_C(0xff1e)
+	/* Send driver debug information to firmware */
+	#define HWRM_DBG_DRV_TRACE                        UINT32_C(0xff1f)
 	/* Experimental */
 	#define HWRM_NVM_FACTORY_DEFAULTS                 UINT32_C(0xffee)
 	#define HWRM_NVM_VALIDATE_OPTION                  UINT32_C(0xffef)
@@ -857,8 +914,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 1
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 6
-#define HWRM_VERSION_STR "1.10.1.6"
+#define HWRM_VERSION_RSVD 30
+#define HWRM_VERSION_STR "1.10.1.30"
 
 /****************
  * hwrm_ver_get *
@@ -1143,6 +1200,7 @@ struct hwrm_ver_get_output {
 	#define HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_ADV_FLOW_MGNT_SUPPORTED \
 		UINT32_C(0x1000)
 	/*
+	 * Deprecated and replaced with cfa_truflow_supported.
 	 * If set to 1, the firmware is able to support TFLIB features.
 	 * If set to 0, then the firmware doesn’t support TFLIB features.
 	 * By default, this flag should be 0 for older version of core firmware.
@@ -1150,6 +1208,14 @@ struct hwrm_ver_get_output {
 	#define HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_TFLIB_SUPPORTED \
 		UINT32_C(0x2000)
 	/*
+	 * If set to 1, the firmware is able to support TruFlow features.
+	 * If set to 0, then the firmware doesn’t support TruFlow features.
+	 * By default, this flag should be 0 for older versions of
+	 * core firmware.
+	 */
+	#define HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_TRUFLOW_SUPPORTED \
+		UINT32_C(0x4000)
+	/*
 	 * This field represents the major version of RoCE firmware.
 	 * A change in major version represents a major release.
 	 */
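As a usage sketch (not part of the patch; the response plumbing and the
rte_le_to_cpu_32() byte-order handling are assumed), a driver would probe
the new TruFlow capability bit from the HWRM_VER_GET response like this:

/* Hypothetical helper: returns non-zero when firmware advertises
 * TruFlow support in dev_caps_cfg of the HWRM_VER_GET response. */
static int bnxt_fw_has_truflow(const struct hwrm_ver_get_output *resp)
{
	uint32_t caps = rte_le_to_cpu_32(resp->dev_caps_cfg);

	return !!(caps &
		  HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_TRUFLOW_SUPPORTED);
}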
@@ -4508,10 +4574,16 @@ struct hwrm_async_event_cmpl {
 	 */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_EEM_CFG_CHANGE \
 		UINT32_C(0x3c)
-	/* TFLIB unique default VNIC Configuration Change */
+	/*
+	 * Deprecated.
+	 * TFLIB unique default VNIC Configuration Change
+	 */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_TFLIB_DEFAULT_VNIC_CHANGE \
 		UINT32_C(0x3d)
-	/* TFLIB unique link status changed */
+	/*
+	 * Deprecated.
+	 * TFLIB unique link status changed
+	 */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_TFLIB_LINK_STATUS_CHANGE \
 		UINT32_C(0x3e)
 	/*
@@ -4521,6 +4593,19 @@ struct hwrm_async_event_cmpl {
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_QUIESCE_DONE \
 		UINT32_C(0x3f)
 	/*
+	 * An event signifying a HWRM command is in progress and its
+	 * response will be deferred. This event is used on crypto controllers
+	 * only.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_DEFERRED_RESPONSE \
+		UINT32_C(0x40)
+	/*
+	 * An event signifying that a PFC WatchDog configuration
+	 * has changed on any port / cos.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE \
+		UINT32_C(0x41)
+	/*
 	 * A trace log message. This contains firmware trace logs string
 	 * embedded in the asynchronous message. This is an experimental
 	 * event, not meant for production use at this time.
@@ -6393,6 +6478,36 @@ struct hwrm_async_event_cmpl_quiesce_done {
 		UINT32_C(0x2)
 	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_QUIESCE_STATUS_LAST \
 		HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_QUIESCE_STATUS_ERROR
+	/* opaque is 8 b */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_OPAQUE_MASK \
+		UINT32_C(0xff00)
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_OPAQUE_SFT \
+		8
+	/*
+	 * Additional information about internal hardware state related to
+	 * idle/quiesce state.  QUIESCE may succeed per quiesce_status
+	 * regardless of idle_state_flags.  If QUIESCE fails, the host may
+	 * inspect idle_state_flags to determine whether a retry is warranted.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_MASK \
+		UINT32_C(0xff0000)
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_SFT \
+		16
+	/*
+	 * Failure to quiesce is caused by the host not updating the NQ
+	 * consumer index.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_INCOMPLETE_NQ \
+		UINT32_C(0x10000)
+	/* Flag 1 indicating partial non-idle state. */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_IDLE_STATUS_1 \
+		UINT32_C(0x20000)
+	/* Flag 2 indicating partial non-idle state. */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_IDLE_STATUS_2 \
+		UINT32_C(0x40000)
+	/* Flag 3 indicating partial non-idle state. */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_IDLE_STATUS_3 \
+		UINT32_C(0x80000)
 	uint8_t	opaque_v;
 	/*
 	 * This value is written by the NIC such that it will be different
@@ -6414,6 +6529,152 @@ struct hwrm_async_event_cmpl_quiesce_done {
 		UINT32_C(0x1)
 } __attribute__((packed));
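The new idle_state_flags bits are positioned within event_data2 itself
(bit 16 and up), so a handler can test them without shifting. A minimal
sketch, assuming the caller has already extracted event_data2 from the
completion:

/* Illustrative only: decide whether a failed quiesce is worth retrying.
 * INCOMPLETE_NQ means the host stalled the NQ consumer index. */
static int bnxt_quiesce_should_retry(uint32_t event_data2)
{
	if (event_data2 &
	    HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_INCOMPLETE_NQ)
		return 1; /* service the NQs, then retry the quiesce */
	return 0;
}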
 
+/* hwrm_async_event_cmpl_deferred_response (size:128b/16B) */
+struct hwrm_async_event_cmpl_deferred_response {
+	uint16_t	type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_SFT             0
+	/* HWRM Asynchronous Event Information */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_HWRM_ASYNC_EVENT \
+		UINT32_C(0x2e)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_HWRM_ASYNC_EVENT
+	/* Identifiers of events. */
+	uint16_t	event_id;
+	/*
+	 * An event signifying a HWRM command is in progress and its
+	 * response will be deferred
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_ID_DEFERRED_RESPONSE \
+		UINT32_C(0x40)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_ID_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_ID_DEFERRED_RESPONSE
+	/* Event specific data */
+	uint32_t	event_data2;
+	/*
+	 * The PF's mailbox is clear to issue another command.
+	 * A command with this seq_id is still in progress
+	 * and will return a regular HWRM completion when done.
+	 * The 'event_data1' field, if non-zero, contains the estimated
+	 * execution time for the command.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_DATA2_SEQ_ID_MASK \
+		UINT32_C(0xffff)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_DATA2_SEQ_ID_SFT \
+		0
+	uint8_t	opaque_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_V \
+		UINT32_C(0x1)
+	/* opaque is 7 b */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_OPAQUE_MASK \
+		UINT32_C(0xfe)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_OPAQUE_SFT 1
+	/* 8-lsb timestamp from POR (100-msec resolution) */
+	uint8_t	timestamp_lo;
+	/* 16-lsb timestamp from POR (100-msec resolution) */
+	uint16_t	timestamp_hi;
+	/* Estimated remaining time of command execution in ms (if not zero) */
+	uint32_t	event_data1;
+} __attribute__((packed));
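A sketch of consuming this event (helper name assumed): the completion
frees the mailbox immediately, while event_data2 carries the seq_id of
the still-running command and event_data1 its estimated remaining time
in milliseconds:

/* Hypothetical handler: returns the seq_id of the deferred command and,
 * via *est_ms, the firmware's remaining-time estimate (0 if unknown). */
static uint16_t
bnxt_deferred_response_seq_id(const struct hwrm_async_event_cmpl_deferred_response *ev,
			      uint32_t *est_ms)
{
	*est_ms = rte_le_to_cpu_32(ev->event_data1);
	return rte_le_to_cpu_32(ev->event_data2) &
	       HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_DATA2_SEQ_ID_MASK;
}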
+
+/* hwrm_async_event_cmpl_pfc_watchdog_cfg_change (size:128b/16B) */
+struct hwrm_async_event_cmpl_pfc_watchdog_cfg_change {
+	uint16_t	type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_SFT \
+		0
+	/* HWRM Asynchronous Event Information */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_HWRM_ASYNC_EVENT \
+		UINT32_C(0x2e)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_LAST \
+		HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_HWRM_ASYNC_EVENT
+	/* Identifiers of events. */
+	uint16_t	event_id;
+	/* PFC watchdog configuration change for given port/cos */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE \
+		UINT32_C(0x41)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_ID_LAST \
+		HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE
+	/* Event specific data */
+	uint32_t	event_data2;
+	uint8_t	opaque_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_V \
+		UINT32_C(0x1)
+	/* opaque is 7 b */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_OPAQUE_MASK \
+		UINT32_C(0xfe)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_OPAQUE_SFT 1
+	/* 8-lsb timestamp from POR (100-msec resolution) */
+	uint8_t	timestamp_lo;
+	/* 16-lsb timestamp from POR (100-msec resolution) */
+	uint16_t	timestamp_hi;
+	/* Event specific data */
+	uint32_t	event_data1;
+	/*
+	 * 1 in bit position X indicates PFC watchdog should
+	 * be on for COSX
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_MASK \
+		UINT32_C(0xff)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_SFT \
+		0
+	/* 1 means PFC WD for COS0 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS0 \
+		UINT32_C(0x1)
+	/* 1 means PFC WD for COS1 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS1 \
+		UINT32_C(0x2)
+	/* 1 means PFC WD for COS2 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS2 \
+		UINT32_C(0x4)
+	/* 1 means PFC WD for COS3 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS3 \
+		UINT32_C(0x8)
+	/* 1 means PFC WD for COS4 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS4 \
+		UINT32_C(0x10)
+	/* 1 means PFC WD for COS5 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS5 \
+		UINT32_C(0x20)
+	/* 1 means PFC WD for COS6 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS6 \
+		UINT32_C(0x40)
+	/* 1 means PFC WD for COS7 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS7 \
+		UINT32_C(0x80)
+	/* PORT ID */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PORT_ID_MASK \
+		UINT32_C(0xffff00)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PORT_ID_SFT \
+		8
+} __attribute__((packed));
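event_data1 packs the 8-bit per-CoS enable bitmap in its low byte and the
port ID above it; a minimal decode sketch (helper name assumed):

/* Illustrative decode of a PFC watchdog config-change notification.
 * Bit i of *cos_bitmap set means the watchdog is now on for COSi. */
static void
bnxt_decode_pfc_wd_change(uint32_t event_data1,
			  uint8_t *cos_bitmap, uint16_t *port_id)
{
	*cos_bitmap = event_data1 &
	    HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_MASK;
	*port_id = (event_data1 &
	    HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PORT_ID_MASK) >>
	    HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PORT_ID_SFT;
}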
+
 /* hwrm_async_event_cmpl_fw_trace_msg (size:128b/16B) */
 struct hwrm_async_event_cmpl_fw_trace_msg {
 	uint16_t	type;
@@ -7220,7 +7481,7 @@ struct hwrm_func_qcaps_input {
 	uint8_t	unused_0[6];
 } __attribute__((packed));
 
-/* hwrm_func_qcaps_output (size:640b/80B) */
+/* hwrm_func_qcaps_output (size:704b/88B) */
 struct hwrm_func_qcaps_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -7441,6 +7702,33 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_NOTIFY_VF_DEF_VNIC_CHNG_SUPPORTED \
 		UINT32_C(0x4000000)
+	/* If set to 1, then the vlan acceleration for TX is disabled. */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_VLAN_ACCELERATION_TX_DISABLED \
+		UINT32_C(0x8000000)
+	/*
+	 * When this bit is '1', it indicates that core firmware supports
+	 * DBG_COREDUMP_XXX commands.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_COREDUMP_CMD_SUPPORTED \
+		UINT32_C(0x10000000)
+	/*
+	 * When this bit is '1', it indicates that core firmware supports
+	 * DBG_CRASHDUMP_XXX commands.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_CRASHDUMP_CMD_SUPPORTED \
+		UINT32_C(0x20000000)
+	/*
+	 * If the query is for a VF, then this flag should be ignored.
+	 * If the query is for a PF and this flag is set to 1, then
+	 * the PF has the capability to support retrieval of
+	 * rx_port_stats_ext_pfc_wd statistics (supported by the PFC
+	 * WatchDog feature) via the hwrm_port_qstats_ext_pfc_wd command.
+	 * If this flag is set to 1, only that command should be used to
+	 * retrieve PFC-related statistics (rather than the
+	 * hwrm_port_qstats_ext command, which could previously be used).
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_PFC_WD_STATS_SUPPORTED \
+		UINT32_C(0x40000000)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
@@ -7551,7 +7839,22 @@ struct hwrm_func_qcaps_output {
 	 * (max_tx_rings) to the function.
 	 */
 	uint16_t	max_sp_tx_rings;
-	uint8_t	unused_0;
+	uint8_t	unused_0[2];
+	uint32_t	flags_ext;
+	/*
+	 * If 1, the device can be configured to set the ECN bits in the
+	 * IP header of received packets if the receive queue length
+	 * exceeds a given threshold.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_MARK_SUPPORTED \
+		UINT32_C(0x1)
+	/*
+	 * If 1, the device can report the number of received packets
+	 * that it marked as having experienced congestion.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_STATS_SUPPORTED \
+		UINT32_C(0x2)
+	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -7606,7 +7909,7 @@ struct hwrm_func_qcfg_input {
 	uint8_t	unused_0[6];
 } __attribute__((packed));
 
-/* hwrm_func_qcfg_output (size:704b/88B) */
+/* hwrm_func_qcfg_output (size:768b/96B) */
 struct hwrm_func_qcfg_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -8016,7 +8319,17 @@ struct hwrm_func_qcfg_output {
 	 * this value to find out the doorbell page offset from the BAR.
 	 */
 	uint16_t	legacy_l2_db_size_kb;
-	uint8_t	unused_2[1];
+	uint16_t	svif_info;
+	/*
+	 * This field specifies the source virtual interface of the function being
+	 * queried. Drivers can use this to program the svif field in the
+	 * L2 context table.
+	 */
+	#define HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK      UINT32_C(0x7fff)
+	#define HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_SFT       0
+	/* This field specifies whether svif is valid or not */
+	#define HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID     UINT32_C(0x8000)
+	uint8_t	unused_2[7];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
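As a usage sketch for the new svif_info field (helper name assumed), the
valid bit gates the 15-bit SVIF value, so callers should test it before
programming the L2 context table:

/* Hypothetical helper: extract the function SVIF from a completed
 * hwrm_func_qcfg response; returns UINT16_MAX when the firmware did
 * not report a valid SVIF. */
static uint16_t bnxt_func_svif(const struct hwrm_func_qcfg_output *resp)
{
	uint16_t svif_info = rte_le_to_cpu_16(resp->svif_info);

	if (!(svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID))
		return UINT16_MAX;
	return svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK;
}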
@@ -9862,8 +10175,12 @@ struct hwrm_func_backing_store_qcaps_output {
 	uint32_t	rsvd;
 	/* Reserved for future. */
 	uint16_t	rsvd1;
-	/* Reserved for future. */
-	uint8_t	rsvd2;
+	/*
+	 * Count of TQM fastpath rings to be used for allocating backing store.
+	 * Backing store configuration must be specified for each TQM ring from
+	 * this count in `backing_store_cfg`.
+	 */
+	uint8_t	tqm_fp_rings_count;
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -12178,116 +12495,163 @@ struct hwrm_error_recovery_qcfg_output {
 	 * this much time after writing reset_reg_val in reset_reg.
 	 */
 	uint8_t	delay_after_reset[16];
-	uint8_t	unused_1[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM.  This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal
-	 * processor, the order of writes has to be such that this field
-	 * is written last.
-	 */
-	uint8_t	valid;
-} __attribute__((packed));
-
-/***********************
- * hwrm_func_vlan_qcfg *
- ***********************/
-
-
-/* hwrm_func_vlan_qcfg_input (size:192b/24B) */
-struct hwrm_func_vlan_qcfg_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
+	 * Error recovery counter.
+	 * Lower 2 bits indicate the address space location and upper 30
+	 * bits indicate the actual address.
+	 * A value of 0xFFFF-FFFF indicates this register does not exist.
 	 */
-	uint16_t	target_id;
+	uint32_t	err_recovery_cnt_reg;
+	/* Lower 2 bits indicate the address space location. */
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_MASK \
+		UINT32_C(0x3)
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_SFT \
+		0
 	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
+	 * If value is 0, this register is located in PCIe config space.
+	 * Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint64_t	resp_addr;
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_PCIE_CFG \
+		UINT32_C(0x0)
 	/*
-	 * Function ID of the function that is being
-	 * configured.
-	 * If set to 0xFF... (All Fs), then the configuration is
-	 * for the requesting function.
+	 * If value is 1, this register is located in GRC address space.
+	 * Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint16_t	fid;
-	uint8_t	unused_0[6];
-} __attribute__((packed));
-
-/* hwrm_func_vlan_qcfg_output (size:320b/40B) */
-struct hwrm_func_vlan_qcfg_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	uint64_t	unused_0;
-	/* S-TAG VLAN identifier configured for the function. */
-	uint16_t	stag_vid;
-	/* S-TAG PCP value configured for the function. */
-	uint8_t	stag_pcp;
-	uint8_t	unused_1;
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_GRC \
+		UINT32_C(0x1)
 	/*
-	 * S-TAG TPID value configured for the function. This field is specified in
-	 * network byte order.
+	 * If value is 2, this register is located in first BAR address
+	 * space. Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint16_t	stag_tpid;
-	/* C-TAG VLAN identifier configured for the function. */
-	uint16_t	ctag_vid;
-	/* C-TAG PCP value configured for the function. */
-	uint8_t	ctag_pcp;
-	uint8_t	unused_2;
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_BAR0 \
+		UINT32_C(0x2)
 	/*
-	 * C-TAG TPID value configured for the function. This field is specified in
-	 * network byte order.
+	 * If value is 3, this register is located in second BAR address
+	 * space. Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint16_t	ctag_tpid;
-	/* Future use. */
-	uint32_t	rsvd2;
-	/* Future use. */
-	uint32_t	rsvd3;
-	uint8_t	unused_3[3];
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_BAR1 \
+		UINT32_C(0x3)
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_LAST \
+		HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_BAR1
+	/* Upper 30 bits of the register address. */
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_MASK \
+		UINT32_C(0xfffffffc)
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SFT \
+		2
+	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
 	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
 
-/**********************
- * hwrm_func_vlan_cfg *
- **********************/
+/***********************
+ * hwrm_func_vlan_qcfg *
+ ***********************/
 
 
-/* hwrm_func_vlan_cfg_input (size:384b/48B) */
-struct hwrm_func_vlan_cfg_input {
+/* hwrm_func_vlan_qcfg_input (size:192b/24B) */
+struct hwrm_func_vlan_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Function ID of the function that is being
+	 * configured.
+	 * If set to 0xFF... (All Fs), then the configuration is
+	 * for the requesting function.
+	 */
+	uint16_t	fid;
+	uint8_t	unused_0[6];
+} __attribute__((packed));
+
+/* hwrm_func_vlan_qcfg_output (size:320b/40B) */
+struct hwrm_func_vlan_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint64_t	unused_0;
+	/* S-TAG VLAN identifier configured for the function. */
+	uint16_t	stag_vid;
+	/* S-TAG PCP value configured for the function. */
+	uint8_t	stag_pcp;
+	uint8_t	unused_1;
+	/*
+	 * S-TAG TPID value configured for the function. This field is specified in
+	 * network byte order.
+	 */
+	uint16_t	stag_tpid;
+	/* C-TAG VLAN identifier configured for the function. */
+	uint16_t	ctag_vid;
+	/* C-TAG PCP value configured for the function. */
+	uint8_t	ctag_pcp;
+	uint8_t	unused_2;
+	/*
+	 * C-TAG TPID value configured for the function. This field is specified in
+	 * network byte order.
+	 */
+	uint16_t	ctag_tpid;
+	/* Future use. */
+	uint32_t	rsvd2;
+	/* Future use. */
+	uint32_t	rsvd3;
+	uint8_t	unused_3[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/**********************
+ * hwrm_func_vlan_cfg *
+ **********************/
+
+
+/* hwrm_func_vlan_cfg_input (size:384b/48B) */
+struct hwrm_func_vlan_cfg_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -14039,6 +14403,9 @@ struct hwrm_port_phy_qcfg_output {
 	/* Module is not inserted. */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_NOTINSERTED \
 		UINT32_C(0x4)
+	/* Module is powered down because of an overcurrent fault. */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_CURRENTFAULT \
+		UINT32_C(0x5)
 	/* Module status is not applicable. */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_NOTAPPLICABLE \
 		UINT32_C(0xff)
@@ -15010,7 +15377,7 @@ struct hwrm_port_mac_qcfg_input {
 	uint8_t	unused_0[6];
 } __attribute__((packed));
 
-/* hwrm_port_mac_qcfg_output (size:192b/24B) */
+/* hwrm_port_mac_qcfg_output (size:256b/32B) */
 struct hwrm_port_mac_qcfg_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -15250,6 +15617,20 @@ struct hwrm_port_mac_qcfg_output {
 		UINT32_C(0xe0)
 	#define HWRM_PORT_MAC_QCFG_OUTPUT_COS_FIELD_CFG_DEFAULT_COS_SFT \
 		5
+	uint8_t	unused_1;
+	uint16_t	port_svif_info;
+	/*
+	 * This field specifies the source virtual interface of the port being
+	 * queried. Drivers can use this to program the port svif field in
+	 * the L2 context table.
+	 */
+	#define HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_MASK \
+		UINT32_C(0x7fff)
+	#define HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_SFT       0
+	/* This field specifies whether port_svif is valid or not */
+	#define HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_VALID \
+		UINT32_C(0x8000)
+	uint8_t	unused_2[5];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -15322,17 +15703,17 @@ struct hwrm_port_mac_ptp_qcfg_output {
 	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_DIRECT_ACCESS \
 		UINT32_C(0x1)
 	/*
-	 * When this bit is set to '1', the PTP information is accessible
-	 * via HWRM commands.
-	 */
-	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_HWRM_ACCESS \
-		UINT32_C(0x2)
-	/*
 	 * When this bit is set to '1', the device supports one-step
 	 * Tx timestamping.
 	 */
 	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_ONE_STEP_TX_TS \
 		UINT32_C(0x4)
+	/*
+	 * When this bit is set to '1', the PTP information is accessible
+	 * via HWRM commands.
+	 */
+	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_HWRM_ACCESS \
+		UINT32_C(0x8)
 	uint8_t	unused_0[3];
 	/* Offset of the PTP register for the lower 32 bits of timestamp for RX. */
 	uint32_t	rx_ts_reg_off_lower;
@@ -15375,7 +15756,7 @@ struct hwrm_port_mac_ptp_qcfg_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
-/* Port Tx Statistics Formats */
+/* Port Tx Statistics Format */
 /* tx_port_stats (size:3264b/408B) */
 struct tx_port_stats {
 	/* Total Number of 64 Bytes frames transmitted */
@@ -15516,7 +15897,7 @@ struct tx_port_stats {
 	uint64_t	tx_stat_error;
 } __attribute__((packed));
 
-/* Port Rx Statistics Formats */
+/* Port Rx Statistics Format */
 /* rx_port_stats (size:4224b/528B) */
 struct rx_port_stats {
 	/* Total Number of 64 Bytes frames received */
@@ -15806,7 +16187,7 @@ struct hwrm_port_qstats_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
-/* Port Tx Statistics extended Formats */
+/* Port Tx Statistics extended Format */
 /* tx_port_stats_ext (size:2048b/256B) */
 struct tx_port_stats_ext {
 	/* Total number of tx bytes count on cos queue 0 */
@@ -15875,7 +16256,7 @@ struct tx_port_stats_ext {
 	uint64_t	pfc_pri7_tx_transitions;
 } __attribute__((packed));
 
-/* Port Rx Statistics extended Formats */
+/* Port Rx Statistics extended Format */
 /* rx_port_stats_ext (size:3648b/456B) */
 struct rx_port_stats_ext {
 	/* Number of times link state changed to down */
@@ -15997,6 +16378,424 @@ struct rx_port_stats_ext {
 	uint64_t	rx_discard_packets_cos7;
 } __attribute__((packed));
 
+/*
+ * Port Rx Statistics extended PFC WatchDog Format.
+ * StormDetect and StormRevert event determination is based
+ * on an integration period and a percentage threshold.
+ * StormDetect event - when the percentage of XOFF frames received
+ * within an integration period exceeds the configured threshold.
+ * StormRevert event - when the percentage of XON frames received
+ * within an integration period exceeds the configured threshold.
+ * Actual number of XOFF/XON frames for the events to be triggered
+ * depends on both configured integration period and sampling rate.
+ * The statistics in this structure represent counts of specified
+ * events from the moment the feature (PFC WatchDog) is enabled via
+ * the hwrm_queue_pfcenable_cfg call.
+ */
+/* rx_port_stats_ext_pfc_wd (size:5120b/640B) */
+struct rx_port_stats_ext_pfc_wd {
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri0;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri1;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri2;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri3;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri4;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri5;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri6;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri7;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri0;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri1;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri2;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri3;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri4;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri5;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri6;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri7;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri0;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri1;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri2;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri3;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri4;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri5;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri6;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri7;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri0;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri1;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri2;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri3;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri4;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri5;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri6;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri7;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri0;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri1;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri2;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri3;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri4;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri5;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri6;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri7;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri0;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri1;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri2;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri3;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri4;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri5;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri6;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri7;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri0;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri1;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri2;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri3;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri4;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri5;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri6;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri7;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri0;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri1;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri2;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri3;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri4;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri5;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri6;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri7;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri0;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri1;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri2;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri3;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri4;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri5;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri6;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri7;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri0;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri1;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri2;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri3;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri4;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri5;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri6;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri7;
+} __attribute__((packed));
+
 /************************
  * hwrm_port_qstats_ext *
  ************************/
@@ -16090,6 +16889,83 @@ struct hwrm_port_qstats_ext_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
+/*******************************
+ * hwrm_port_qstats_ext_pfc_wd *
+ *******************************/
+
+
+/* hwrm_port_qstats_ext_pfc_wd_input (size:256b/32B) */
+struct hwrm_port_qstats_ext_pfc_wd_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Port ID of port that is being queried. */
+	uint16_t	port_id;
+	/*
+	 * The size of rx_port_stats_ext_pfc_wd
+	 * block in bytes
+	 */
+	uint16_t	pfc_wd_stat_size;
+	uint8_t	unused_0[4];
+	/*
+	 * This is the host address where
+	 * rx_port_stats_ext_pfc_wd will be stored
+	 */
+	uint64_t	pfc_wd_stat_host_addr;
+} __attribute__((packed));
+
+/* hwrm_port_qstats_ext_pfc_wd_output (size:128b/16B) */
+struct hwrm_port_qstats_ext_pfc_wd_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * The size of rx_port_stats_ext_pfc_wd
+	 * statistics block in bytes.
+	 */
+	uint16_t	pfc_wd_stat_size;
+	uint8_t	flags;
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+	uint8_t	unused_0[4];
+} __attribute__((packed));
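Filling the request is mechanical: the caller supplies a DMA-able buffer
sized for rx_port_stats_ext_pfc_wd and passes its IOVA. A sketch, with
the HWRM send path and buffer allocation assumed to be existing driver
plumbing:

/* Illustrative request setup for HWRM_PORT_QSTATS_EXT_PFC_WD. */
static void
bnxt_fill_pfc_wd_qstats_req(struct hwrm_port_qstats_ext_pfc_wd_input *req,
			    uint16_t port_id, uint64_t stats_iova)
{
	req->port_id = rte_cpu_to_le_16(port_id);
	req->pfc_wd_stat_size =
		rte_cpu_to_le_16(sizeof(struct rx_port_stats_ext_pfc_wd));
	req->pfc_wd_stat_host_addr = rte_cpu_to_le_64(stats_iova);
}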
+
 /*************************
  * hwrm_port_lpbk_qstats *
  *************************/
@@ -16168,6 +17044,91 @@ struct hwrm_port_lpbk_qstats_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
+/************************
+ * hwrm_port_ecn_qstats *
+ ************************/
+
+
+/* hwrm_port_ecn_qstats_input (size:192b/24B) */
+struct hwrm_port_ecn_qstats_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Port ID of port that is being queried. Unused if NIC is in
+	 * multi-host mode.
+	 */
+	uint16_t	port_id;
+	uint8_t	unused_0[6];
+} __attribute__((packed));
+
+/* hwrm_port_ecn_qstats_output (size:384b/48B) */
+struct hwrm_port_ecn_qstats_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of packets marked in CoS queue 0. */
+	uint32_t	mark_cnt_cos0;
+	/* Number of packets marked in CoS queue 1. */
+	uint32_t	mark_cnt_cos1;
+	/* Number of packets marked in CoS queue 2. */
+	uint32_t	mark_cnt_cos2;
+	/* Number of packets marked in CoS queue 3. */
+	uint32_t	mark_cnt_cos3;
+	/* Number of packets marked in CoS queue 4. */
+	uint32_t	mark_cnt_cos4;
+	/* Number of packets marked in CoS queue 5. */
+	uint32_t	mark_cnt_cos5;
+	/* Number of packets marked in CoS queue 6. */
+	uint32_t	mark_cnt_cos6;
+	/* Number of packets marked in CoS queue 7. */
+	uint32_t	mark_cnt_cos7;
+	/*
+	 * Bitmask that indicates which CoS queues have ECN marking enabled.
+	 * Bit i corresponds to CoS queue i.
+	 */
+	uint8_t	mark_en;
+	uint8_t	unused_0[6];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
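Since mark_en is a per-CoS bitmask, mark_cnt_cosN is only meaningful when
bit N is set; a small sketch of the check (helper name assumed):

/* Illustrative: is ECN marking enabled for the given CoS queue (0-7)? */
static int
bnxt_ecn_mark_enabled(const struct hwrm_port_ecn_qstats_output *resp,
		      unsigned int cos)
{
	return cos < 8 && ((resp->mark_en >> cos) & 1);
}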
+
 /***********************
  * hwrm_port_clr_stats *
  ***********************/
@@ -18322,7 +19283,7 @@ struct hwrm_port_phy_mdio_bus_acquire_input {
 	 * Timeout in milli seconds, MDIO BUS will be released automatically
 	 * after this time, if another mdio acquire command is not received
 	 * within the timeout window from the same client.
-	 * A 0xFFFF will hold the bus until this bus is released.
+	 * A 0xFFFF will hold the bus until this bus is released.
 	 */
 	uint16_t	mdio_bus_timeout;
 	uint8_t	unused_0[2];
@@ -19158,6 +20119,30 @@ struct hwrm_queue_pfcenable_qcfg_output {
 	/* If set to 1, then PFC is enabled on PRI 7. */
 	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI7_PFC_ENABLED \
 		UINT32_C(0x80)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI0. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI0_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x100)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI1. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI1_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x200)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI2. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI2_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x400)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI3. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI3_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x800)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI4. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI4_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x1000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI5. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI5_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x2000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI6. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI6_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x4000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI7. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI7_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x8000)
 	uint8_t	unused_0[3];
 	/*
 	 * This field is used in Output records to indicate that the output
@@ -19229,6 +20214,30 @@ struct hwrm_queue_pfcenable_cfg_input {
 	/* If set to 1, then PFC is requested to be enabled on PRI 7. */
 	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI7_PFC_ENABLED \
 		UINT32_C(0x80)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI0. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI0_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x100)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI1. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI1_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x200)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI2. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI2_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x400)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI3. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI3_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x800)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI4. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI4_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x1000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI5. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI5_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x2000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI6. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI6_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x4000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI7. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI7_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x8000)
 	/*
 	 * Port ID of port for which the table is being configured.
 	 * The HWRM needs to check whether this function is allowed
@@ -31831,15 +32840,2172 @@ struct hwrm_cfa_eem_qcfg_input {
 	 */
 	uint64_t	resp_addr;
 	uint32_t	flags;
-	/* When set to 1, indicates the configuration is the TX flow. */
-	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
-	/* When set to 1, indicates the configuration is the RX flow. */
-	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
-	uint32_t	unused_0;
+	/* When set to 1, indicates the configuration is the TX flow. */
+	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
+	/* When set to 1, indicates the configuration is the RX flow. */
+	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
+	uint32_t	unused_0;
+} __attribute__((packed));
+
+/* hwrm_cfa_eem_qcfg_output (size:256b/32B) */
+struct hwrm_cfa_eem_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/* When set to 1, indicates the configuration is the TX flow. */
+	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_TX \
+		UINT32_C(0x1)
+	/* When set to 1, indicates the configuration is the RX flow. */
+	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_RX \
+		UINT32_C(0x2)
+	/* When set to 1, all offloaded flows will be sent to EEM. */
+	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x4)
+	/* The number of entries the FW has configured for EEM. */
+	uint32_t	num_entries;
+	/* Configured EEM with the given context id for the KEY0 table. */
+	uint16_t	key0_ctx_id;
+	/* Configured EEM with the given context id for the KEY1 table. */
+	uint16_t	key1_ctx_id;
+	/* Configured EEM with the given context id for the RECORD table. */
+	uint16_t	record_ctx_id;
+	/* Configured EEM with the given context id for the EFC table. */
+	uint16_t	efc_ctx_id;
+	/* Configured EEM with the given context id for the FID table. */
+	uint16_t	fid_ctx_id;
+	uint8_t	unused_2[5];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
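A consumer of this response would byte-swap the flags and test
PREFERRED_OFFLOAD before steering flows to EEM; a sketch under the same
assumptions as the earlier examples:

/* Illustrative: do offloaded flows prefer EEM on this path? */
static int bnxt_eem_preferred(const struct hwrm_cfa_eem_qcfg_output *resp)
{
	return !!(rte_le_to_cpu_32(resp->flags) &
		  HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD);
}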
+
+/*******************
+ * hwrm_cfa_eem_op *
+ *******************/
+
+
+/* hwrm_cfa_eem_op_input (size:192b/24B) */
+struct hwrm_cfa_eem_op_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	flags;
+	/*
+	 * When set to 1, indicates the host memory which is passed will be
+	 * used for the TX flow offload function specified in fid.
+	 * Note if this bit is set then the path_rx bit can't be set.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
+	/*
+	 * When set to 1, indicates the host memory which is passed will be
+	 * used for the RX flow offload function specified in fid.
+	 * Note if this bit is set then the path_tx bit can't be set.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
+	uint16_t	unused_0;
+	/* The EEM operation to perform. */
+	uint16_t	op;
+	/* This value is reserved and should not be used. */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_RESERVED    UINT32_C(0x0)
+	/*
+	 * To properly stop EEM and ensure there are no DMAs in flight, the
+	 * caller must disable EEM for the given PF using this call. This
+	 * will safely disable EEM and ensure that all DMAs to the
+	 * keys/records/efc have been completed.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_DISABLE UINT32_C(0x1)
+	/*
+	 * Once the EEM host memory and EEM options have been configured,
+	 * the caller should enable EEM for the given PF. Note that once
+	 * this call has been made, the EEM mechanism will be active and
+	 * DMAs will occur as packets are processed.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_ENABLE  UINT32_C(0x2)
+	/*
+	 * Clear EEM settings for the given PF so that the register values
+	 * are reset back to their initial state.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP UINT32_C(0x3)
+	#define HWRM_CFA_EEM_OP_INPUT_OP_LAST \
+		HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP
+} __attribute__((packed));
+
+/* hwrm_cfa_eem_op_output (size:128b/16B) */
+struct hwrm_cfa_eem_op_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
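The op values above imply a strict teardown order: EEM_DISABLE first, to
quiesce key/record/EFC DMA, then EEM_CLEANUP to reset register state. A
sketch of a request builder encoding one step of that sequence (the send
helper is assumed):

/* Illustrative: prepare one EEM op request. The caller sends DISABLE,
 * waits for the completion, then sends CLEANUP with the same path flag. */
static void
bnxt_fill_eem_op_req(struct hwrm_cfa_eem_op_input *req,
		     int tx_path, uint16_t op)
{
	req->flags = rte_cpu_to_le_32(tx_path ?
		HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_TX :
		HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX);
	req->op = rte_cpu_to_le_16(op);
}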
+
+/********************************
+ * hwrm_cfa_adv_flow_mgnt_qcaps *
+ ********************************/
+
+
+/* hwrm_cfa_adv_flow_mgnt_qcaps_input (size:256b/32B) */
+struct hwrm_cfa_adv_flow_mgnt_qcaps_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	unused_0[4];
+} __attribute__((packed));
+
+/* hwrm_cfa_adv_flow_mgnt_qcaps_output (size:128b/16B) */
+struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/*
+	 * Value of 1 to indicate that firmware supports 16-bit flow handles.
+	 * Value of 0 to indicate that firmware does not support 16-bit flow
+	 * handles.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_16BIT_SUPPORTED \
+		UINT32_C(0x1)
+	/*
+	 * Value of 1 to indicate that the firmware supports 64-bit flow
+	 * handles. Value of 0 to indicate that it does not.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_64BIT_SUPPORTED \
+		UINT32_C(0x2)
+	/*
+	 * Value of 1 to indicate that the firmware supports the flow batch
+	 * delete operation through the HWRM_CFA_FLOW_FLUSH command.
+	 * Value of 0 to indicate that the firmware does not support the
+	 * flow batch delete operation.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_BATCH_DELETE_SUPPORTED \
+		UINT32_C(0x4)
+	/*
+	 * Value of 1 to indicate that the firmware supports the flow reset
+	 * all operation through the HWRM_CFA_FLOW_FLUSH command.
+	 * Value of 0 indicates firmware does not support the flow reset
+	 * all operation.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_RESET_ALL_SUPPORTED \
+		UINT32_C(0x8)
+	/*
+	 * Value of 1 to indicate that firmware supports use of FID as dest_id in
+	 * HWRM_CFA_NTUPLE_ALLOC/CFG commands.
+	 * Value of 0 indicates firmware does not support use of FID as dest_id.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_DEST_FUNC_SUPPORTED \
+		UINT32_C(0x10)
+	/*
+	 * Value of 1 to indicate that firmware supports TX EEM flows.
+	 * Value of 0 indicates firmware does not support TX EEM flows.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_TX_EEM_FLOW_SUPPORTED \
+		UINT32_C(0x20)
+	/*
+	 * Value of 1 to indicate that firmware supports RX EEM flows.
+	 * Value of 0 indicates firmware does not support RX EEM flows.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RX_EEM_FLOW_SUPPORTED \
+		UINT32_C(0x40)
+	/*
+	 * Value of 1 to indicate that firmware supports the dynamic allocation of an
+	 * on-chip flow counter which can be used for EEM flows.
+	 * Value of 0 indicates firmware does not support the dynamic allocation of an
+	 * on-chip flow counter.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_COUNTER_ALLOC_SUPPORTED \
+		UINT32_C(0x80)
+	/*
+	 * Value of 1 to indicate that firmware supports setting of
+	 * rfs_ring_tbl_idx in HWRM_CFA_NTUPLE_ALLOC command.
+	 * Value of 0 indicates firmware does not support rfs_ring_tbl_idx.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_SUPPORTED \
+		UINT32_C(0x100)
+	/*
+	 * Value of 1 to indicate that firmware supports untagged matching
+	 * criteria on HWRM_CFA_L2_FILTER_ALLOC command. Value of 0
+	 * indicates firmware does not support untagged matching.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_UNTAGGED_VLAN_SUPPORTED \
+		UINT32_C(0x200)
+	/*
+	 * Value of 1 to indicate that firmware supports XDP filter. Value
+	 * of 0 indicates firmware does not support XDP filter.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_XDP_SUPPORTED \
+		UINT32_C(0x400)
+	/*
+	 * Value of 1 to indicate that the firmware supports L2 header source
+	 * fields matching criteria on HWRM_CFA_L2_FILTER_ALLOC command.
+	 * Value of 0 indicates firmware does not support L2 header source
+	 * fields matching.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED \
+		UINT32_C(0x800)
+	/*
+	 * If set to 1, firmware is capable of supporting ARP ethertype as
+	 * matching criteria for HWRM_CFA_NTUPLE_FILTER_ALLOC command on the
+	 * RX direction. By default, this flag should be 0 for older
+	 * versions of firmware.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ARP_SUPPORTED \
+		UINT32_C(0x1000)
+	/*
+	 * Value of 1 to indicate that firmware supports setting of
+	 * rfs_ring_tbl_idx in dst_id field of the HWRM_CFA_NTUPLE_ALLOC
+	 * command. Value of 0 indicates firmware does not support
+	 * rfs_ring_tbl_idx in dst_id field.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_V2_SUPPORTED \
+		UINT32_C(0x2000)
+	/*
+	 * If set to 1, firmware is capable of supporting IPv4/IPv6 as
+	 * ethertype in HWRM_CFA_NTUPLE_FILTER_ALLOC command on the RX
+	 * direction. By default, this flag should be 0 for older
+	 * versions of firmware.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ETHERTYPE_IP_SUPPORTED \
+		UINT32_C(0x4000)
+	uint8_t	unused_0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
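+
+/*
+ * Illustrative sketch only; not part of the generated HSI definitions.
+ * It shows how a driver might test the qcaps flags word returned by the
+ * firmware using the bit definitions above.
+ */
+static inline int example_supports_batch_delete(
+	const struct hwrm_cfa_adv_flow_mgnt_qcaps_output *resp)
+{
+	return (resp->flags &
+		HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_BATCH_DELETE_SUPPORTED) != 0;
+}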
+
+/******************
+ * hwrm_cfa_tflib *
+ ******************/
+
+
+/* hwrm_cfa_tflib_input (size:1024b/128B) */
+struct hwrm_cfa_tflib_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* TFLIB message type. */
+	uint16_t	tf_type;
+	/* TFLIB message subtype. */
+	uint16_t	tf_subtype;
+	/* unused. */
+	uint8_t	unused0[4];
+	/* TFLIB request data. */
+	uint32_t	tf_req[26];
+} __attribute__((packed));
+
+/* hwrm_cfa_tflib_output (size:5632b/704B) */
+struct hwrm_cfa_tflib_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* TFLIB message type. */
+	uint16_t	tf_type;
+	/* TFLIB message subtype. */
+	uint16_t	tf_subtype;
+	/* TFLIB response code */
+	uint32_t	tf_resp_code;
+	/* TFLIB response data. */
+	uint32_t	tf_resp[170];
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/***********
+ * hwrm_tf *
+ ***********/
+
+
+/* hwrm_tf_input (size:1024b/128B) */
+struct hwrm_tf_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* TF message type. */
+	uint16_t	type;
+	/* TF message subtype. */
+	uint16_t	subtype;
+	/* unused. */
+	uint8_t	unused0[4];
+	/* TF request data. */
+	uint32_t	req[26];
+} __attribute__((packed));
+
+/* hwrm_tf_output (size:5632b/704B) */
+struct hwrm_tf_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* TF message type. */
+	uint16_t	type;
+	/* TF message subtype. */
+	uint16_t	subtype;
+	/* TF response code */
+	uint32_t	resp_code;
+	/* TF response data. */
+	uint32_t	resp[170];
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
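+
+/*
+ * Illustrative sketch only; not part of the generated HSI definitions.
+ * It shows how the generic hwrm_tf envelope carries an opaque payload:
+ * the caller sets type/subtype and copies up to 26 32-bit words of
+ * request data into req[]. The word-by-word copy keeps the sketch free
+ * of extra includes.
+ */
+static inline int example_tf_fill(struct hwrm_tf_input *req,
+				  uint16_t type, uint16_t subtype,
+				  const uint32_t *data, uint16_t n_words)
+{
+	uint16_t i;
+
+	if (n_words > (uint16_t)(sizeof(req->req) / sizeof(req->req[0])))
+		return -1;	/* payload does not fit in req[] */
+	req->type = type;
+	req->subtype = subtype;
+	for (i = 0; i < n_words; i++)
+		req->req[i] = data[i];
+	return 0;
+}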
+
+/***********************
+ * hwrm_tf_version_get *
+ ***********************/
+
+
+/* hwrm_tf_version_get_input (size:128b/16B) */
+struct hwrm_tf_version_get_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_version_get_output (size:128b/16B) */
+struct hwrm_tf_version_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Version Major number. */
+	uint8_t	major;
+	/* Version Minor number. */
+	uint8_t	minor;
+	/* Version Update number. */
+	uint8_t	update;
+	/* unused. */
+	uint8_t	unused0[4];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
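+
+/*
+ * Illustrative sketch only; not part of the generated HSI definitions.
+ * It shows a typical "at least version X.Y" gate on the major/minor
+ * fields returned above.
+ */
+static inline int example_tf_version_at_least(
+	const struct hwrm_tf_version_get_output *resp,
+	uint8_t major, uint8_t minor)
+{
+	if (resp->major != major)
+		return resp->major > major;
+	return resp->minor >= minor;
+}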
+
+/************************
+ * hwrm_tf_session_open *
+ ************************/
+
+
+/* hwrm_tf_session_open_input (size:640b/80B) */
+struct hwrm_tf_session_open_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Name of the session. */
+	uint8_t	session_name[64];
+} __attribute__((packed));
+
+/* hwrm_tf_session_open_output (size:128b/16B) */
+struct hwrm_tf_session_open_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
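+
+/*
+ * Illustrative sketch only; not part of the generated HSI definitions.
+ * It shows how the fixed-size session_name[] field might be populated
+ * from a NUL-terminated string, truncating as needed while keeping the
+ * array NUL-terminated.
+ */
+static inline void example_tf_session_open_fill(
+	struct hwrm_tf_session_open_input *req, const char *name)
+{
+	uint32_t i;
+
+	for (i = 0; i < sizeof(req->session_name) - 1 && name[i] != '\0'; i++)
+		req->session_name[i] = (uint8_t)name[i];
+	req->session_name[i] = '\0';
+}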
+
+/**************************
+ * hwrm_tf_session_attach *
+ **************************/
+
+
+/* hwrm_tf_session_attach_input (size:704b/88B) */
+struct hwrm_tf_session_attach_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Unique session identifier for the session that the attach
+	 * request wants to attach to. This value originates from the
+	 * shared session memory that the attach request opened by
+	 * way of the 'attach name' that was passed in to the core
+	 * attach API.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
+	 */
+	uint32_t	attach_fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session itself. */
+	uint8_t	session_name[64];
+} __attribute__((packed));
+
+/* hwrm_tf_session_attach_output (size:128b/16B) */
+struct hwrm_tf_session_attach_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session. This fw_session_id is unique to the attach
+	 * request.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/*************************
+ * hwrm_tf_session_close *
+ *************************/
+
+
+/* hwrm_tf_session_close_input (size:192b/24B) */
+struct hwrm_tf_session_close_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[4];
+} __attribute__((packed));
+
+/* hwrm_tf_session_close_output (size:128b/16B) */
+struct hwrm_tf_session_close_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_session_qcfg *
+ ************************/
+
+
+/* hwrm_tf_session_qcfg_input (size:192b/24B) */
+struct hwrm_tf_session_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[4];
+} __attribute__((packed));
+
+/* hwrm_tf_session_qcfg_output (size:128b/16B) */
+struct hwrm_tf_session_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* RX action control settings flags. */
+	uint8_t	rx_act_flags;
+	/*
+	 * A value of 1 in this field indicates that Global Flow ID
+	 * reporting into cfa_code and cfa_metadata is enabled.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_ABCR_GFID_EN \
+		UINT32_C(0x1)
+	/*
+	 * A value of 1 in this field indicates that both the inner and
+	 * outer tags are stripped and the inner tag is passed.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_ABCR_VTAG_DLT_BOTH \
+		UINT32_C(0x2)
+	/*
+	 * A value of 1 in this field indicates that the re-use of
+	 * existing tunnel L2 header SMAC is enabled for
+	 * Non-tunnel L2, L2-L3 and IP-IP tunnel.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_TECT_SMAC_OVR_RUTNSL2 \
+		UINT32_C(0x4)
+	/* TX Action control settings flags. */
+	uint8_t	tx_act_flags;
+	/* A value of 1 in this field indicates that VEB is enabled. */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_ABCR_VEB_EN \
+		UINT32_C(0x1)
+	/*
+	 * When set to 1 any GRE tunnels will include the
+	 * optional Key field.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_GRE_SET_K \
+		UINT32_C(0x2)
+	/*
+	 * When set to 1, for GRE tunnels, the IPV6 Traffic Class (TC)
+	 * field of the outer header is inherited from the inner header
+	 * (if present) or the fixed value as taken from the encap
+	 * record.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_IPV6_TC_IH \
+		UINT32_C(0x4)
+	/*
+	 * When set to 1, for GRE tunnels, the IPV4 Type Of Service (TOS)
+	 * field of the outer header is inherited from the inner header
+	 * (if present) or the fixed value as taken from the encap record.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_IPV4_TOS_IH \
+		UINT32_C(0x8)
+	/* unused. */
+	uint8_t	unused0[5];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
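+
+/*
+ * Illustrative sketch only; not part of the generated HSI definitions.
+ * It shows how the per-direction action-control flags returned above
+ * decompose into individual booleans.
+ */
+static inline int example_gfid_reporting_enabled(
+	const struct hwrm_tf_session_qcfg_output *resp)
+{
+	return (resp->rx_act_flags &
+		HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_ABCR_GFID_EN) != 0;
+}
+
+static inline int example_gre_key_enabled(
+	const struct hwrm_tf_session_qcfg_output *resp)
+{
+	return (resp->tx_act_flags &
+		HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_GRE_SET_K) != 0;
+}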
+
+/******************************
+ * hwrm_tf_session_resc_qcaps *
+ ******************************/
+
+
+/* hwrm_tf_session_resc_qcaps_input (size:256b/32B) */
+struct hwrm_tf_session_resc_qcaps_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided qcaps_addr
+	 * buffer. The size should be set to the Resource Manager
+	 * provided max qcaps value that is device specific. This is
+	 * the max size possible.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the qcaps output data
+	 * array. Array is of tf_rm_cap type and is device specific.
+	 */
+	uint64_t	qcaps_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_qcaps_output (size:192b/24B) */
+struct hwrm_tf_session_resc_qcaps_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Session reservation strategy. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_MASK \
+		UINT32_C(0x3)
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_SFT \
+		0
+	/* Static partitioning. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC \
+		UINT32_C(0x0)
+	/* Strategy 1. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1 \
+		UINT32_C(0x1)
+	/* Strategy 2. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2 \
+		UINT32_C(0x2)
+	/* Strategy 3. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3 \
+		UINT32_C(0x3)
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3
+	/*
+	 * Size of the returned tf_rm_cap data array. The value
+	 * cannot exceed the size defined by the input msg. The data
+	 * array is returned using the qcaps_addr specified DMA
+	 * address also provided by the input msg.
+	 */
+	uint16_t	size;
+	/* unused. */
+	uint16_t	unused0;
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
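+
+/*
+ * Illustrative sketch only; not part of the generated HSI definitions.
+ * It shows the usual MASK/SFT idiom for extracting the reservation
+ * strategy from the flags word above.
+ */
+static inline uint32_t example_resc_strategy(
+	const struct hwrm_tf_session_resc_qcaps_output *resp)
+{
+	return (resp->flags &
+		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_MASK) >>
+		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_SFT;
+}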
+
+/******************************
+ * hwrm_tf_session_resc_alloc *
+ ******************************/
+
+
+/* hwrm_tf_session_resc_alloc_input (size:256b/32B) */
+struct hwrm_tf_session_resc_alloc_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided num_addr
+	 * buffer.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the num input data array
+	 * buffer. Array is of tf_rm_num type. Size of the buffer is
+	 * provided by the 'size' field in this message.
+	 */
+	uint64_t	num_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_alloc_output (size:128b/16B) */
+struct hwrm_tf_session_resc_alloc_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/*****************************
+ * hwrm_tf_session_resc_free *
+ *****************************/
+
+
+/* hwrm_tf_session_resc_free_input (size:256b/32B) */
+struct hwrm_tf_session_resc_free_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided free_addr
+	 * buffer.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the free input data array
+	 * buffer. Array is of tf_rm_res type. Size of the buffer is
+	 * provided by the 'size' field of this message.
+	 */
+	uint64_t	free_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_free_output (size:128b/16B) */
+struct hwrm_tf_session_resc_free_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/******************************
+ * hwrm_tf_session_resc_flush *
+ ******************************/
+
+
+/* hwrm_tf_session_resc_flush_input (size:256b/32B) */
+struct hwrm_tf_session_resc_flush_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided flush_addr
+	 * buffer.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the flush input data array
+	 * buffer.  Array of tf_rm_res type. Size of the buffer is
+	 * provided by the 'size' field in this message.
+	 */
+	uint64_t	flush_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_flush_output (size:128b/16B) */
+struct hwrm_tf_session_resc_flush_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/* TruFlow RM capability of a resource. */
+/* tf_rm_cap (size:64b/8B) */
+struct tf_rm_cap {
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Minimum value. */
+	uint16_t	min;
+	/* Maximum value. */
+	uint16_t	max;
+} __attribute__((packed));
+
+/* TruFlow RM number of a resource. */
+/* tf_rm_num (size:64b/8B) */
+struct tf_rm_num {
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Number of resources. */
+	uint32_t	num;
+} __attribute__((packed));
+
+/* TruFlow RM reservation information. */
+/* tf_rm_res (size:64b/8B) */
+struct tf_rm_res {
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Start offset. */
+	uint16_t	start;
+	/* Number of resources. */
+	uint16_t	stride;
+} __attribute__((packed));
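+
+/*
+ * Illustrative sketch only; not part of the generated HSI definitions.
+ * It shows how a caller might fill the tf_rm_num array that is DMA'ed
+ * to the firmware for hwrm_tf_session_resc_alloc, and how the input
+ * msg 'size' field relates to the array's byte size.
+ */
+static inline uint16_t example_fill_rm_num(struct tf_rm_num *arr,
+					   const uint32_t *types,
+					   const uint32_t *counts,
+					   uint16_t n)
+{
+	uint16_t i;
+
+	for (i = 0; i < n; i++) {
+		arr[i].type = types[i];
+		arr[i].num = counts[i];
+	}
+	/* Value to place in hwrm_tf_session_resc_alloc_input.size. */
+	return (uint16_t)(n * sizeof(struct tf_rm_num));
+}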
+
+/************************
+ * hwrm_tf_tbl_type_get *
+ ************************/
+
+
+/* hwrm_tf_tbl_type_get_input (size:256b/32B) */
+struct hwrm_tf_tbl_type_get_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint8_t	unused0[2];
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Index of the type to retrieve. */
+	uint32_t	index;
+} __attribute__((packed));
+
+/* hwrm_tf_tbl_type_get_output (size:1216b/152B) */
+struct hwrm_tf_tbl_type_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Response code. */
+	uint32_t	resp_code;
+	/* Response size. */
+	uint16_t	size;
+	/* unused */
+	uint16_t	unused0;
+	/* Response data. */
+	uint8_t	data[128];
+	/* unused */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_tbl_type_set *
+ ************************/
+
+
+/* hwrm_tf_tbl_type_set_input (size:1024b/128B) */
+struct hwrm_tf_tbl_type_set_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint8_t	unused0[2];
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Index of the type to set. */
+	uint32_t	index;
+	/* Size of the data to set. */
+	uint16_t	size;
+	/* unused */
+	uint8_t	unused1[6];
+	/* Data to be set. */
+	uint8_t	data[88];
+} __attribute__((packed));
+
+/* hwrm_tf_tbl_type_set_output (size:128b/16B) */
+struct hwrm_tf_tbl_type_set_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
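+
+/*
+ * Illustrative sketch only; not part of the generated HSI definitions.
+ * It shows a bounds-checked fill of the set request: the payload must
+ * fit in the 88-byte data[] field, and 'size' tells the firmware how
+ * many of those bytes are meaningful.
+ */
+static inline int example_tbl_type_set_fill(
+	struct hwrm_tf_tbl_type_set_input *req,
+	uint32_t type, uint32_t index,
+	const uint8_t *data, uint16_t size)
+{
+	uint16_t i;
+
+	if (size > sizeof(req->data))
+		return -1;	/* payload does not fit in data[] */
+	req->type = type;
+	req->index = index;
+	req->size = size;
+	for (i = 0; i < size; i++)
+		req->data[i] = data[i];
+	return 0;
+}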
+
+/*************************
+ * hwrm_tf_ctxt_mem_rgtr *
+ *************************/
+
+
+/* hwrm_tf_ctxt_mem_rgtr_input (size:256b/32B) */
+struct hwrm_tf_ctxt_mem_rgtr_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Counter PBL indirect levels. */
+	uint8_t	page_level;
+	/* PBL pointer is physical start address. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_0 UINT32_C(0x0)
+	/* PBL pointer points to PTE table. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_1 UINT32_C(0x1)
+	/*
+	 * PBL pointer points to PDE table with each entry pointing
+	 * to PTE tables.
+	 */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_2 UINT32_C(0x2)
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LAST \
+		HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_2
+	/* Page size. */
+	uint8_t	page_size;
+	/* 4KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K   UINT32_C(0x0)
+	/* 8KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K   UINT32_C(0x1)
+	/* 64KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K  UINT32_C(0x4)
+	/* 256KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K UINT32_C(0x6)
+	/* 1MB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M   UINT32_C(0x8)
+	/* 2MB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M   UINT32_C(0x9)
+	/* 4MB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M   UINT32_C(0xa)
+	/* 1GB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G   UINT32_C(0x12)
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_LAST \
+		HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+	/* unused. */
+	uint32_t	unused0;
+	/* Pointer to the PBL, or PDL depending on number of levels */
+	uint64_t	page_dir;
+} __attribute__((packed));
+
+/* hwrm_tf_ctxt_mem_rgtr_output (size:128b/16B) */
+struct hwrm_tf_ctxt_mem_rgtr_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Id/Handle to the recently registered context memory. This
+	 * handle is passed to the TF session.
+	 */
+	uint16_t	ctx_id;
+	/* unused. */
+	uint8_t	unused0[5];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
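+
+/*
+ * Illustrative sketch only; not part of the generated HSI definitions.
+ * It maps a few common host page sizes onto the page_size values
+ * above; unsupported sizes are rejected rather than rounded.
+ */
+static inline int example_ctx_page_size(uint32_t bytes, uint8_t *page_size)
+{
+	switch (bytes) {
+	case 4096:
+		*page_size = HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K;
+		return 0;
+	case 65536:
+		*page_size = HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K;
+		return 0;
+	case 2097152:
+		*page_size = HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M;
+		return 0;
+	default:
+		return -1;	/* size not in the enum above */
+	}
+}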
+
+/***************************
+ * hwrm_tf_ctxt_mem_unrgtr *
+ ***************************/
+
+
+/* hwrm_tf_ctxt_mem_unrgtr_input (size:192b/24B) */
+struct hwrm_tf_ctxt_mem_unrgtr_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Id/Handle to the recently registered context memory. This
+	 * handle is passed to the TF session.
+	 */
+	uint16_t	ctx_id;
+	/* unused. */
+	uint8_t	unused0[6];
+} __attribute__((packed));
+
+/* hwrm_tf_ctxt_mem_unrgtr_output (size:128b/16B) */
+struct hwrm_tf_ctxt_mem_unrgtr_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_ext_em_qcaps *
+ ************************/
+
+
+/* hwrm_tf_ext_em_qcaps_input (size:192b/24B) */
+struct hwrm_tf_ext_em_qcaps_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR \
+		UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_RX \
+		UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX \
+		UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX
+	/* When set to 1, all offloaded flows will be sent to EXT EM. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x2)
+	/* unused. */
+	uint32_t	unused0;
+} __attribute__((packed));
+
+/* hwrm_tf_ext_em_qcaps_output (size:320b/40B) */
+struct hwrm_tf_ext_em_qcaps_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/*
+	 * When set to 1, indicates that the FW supports the Centralized
+	 * Memory Model. The concept designates one entity for the
+	 * memory allocation while all others 'subscribe' to it.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_FLAGS_CENTRALIZED_MEMORY_MODEL_SUPPORTED \
+		UINT32_C(0x1)
+	/*
+	 * When set to 1, indicates that the FW supports the Detached
+	 * Centralized Memory Model. The memory is allocated and managed
+	 * as a separate entity. All PFs and VFs will be granted direct
+	 * or semi-direct access to the allocated memory while none of
+	 * which can interfere with the management of the memory.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_FLAGS_DETACHED_CENTRALIZED_MEMORY_MODEL_SUPPORTED \
+		UINT32_C(0x2)
+	/* unused. */
+	uint32_t	unused0;
+	/* Support flags. */
+	uint32_t	supported;
+	/*
+	 * If set to 1, then EXT EM KEY0 table is supported using
+	 * crc32 hash.
+	 * If set to 0, EXT EM KEY0 table is not supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_KEY0_TABLE \
+		UINT32_C(0x1)
+	/*
+	 * If set to 1, then EXT EM KEY1 table is supported using
+	 * lookup3 hash.
+	 * If set to 0, EXT EM KEY1 table is not supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_KEY1_TABLE \
+		UINT32_C(0x2)
+	/*
+	 * If set to 1, then EXT EM External Record table is supported.
+	 * If set to 0, EXT EM External Record table is not
+	 * supported.  (This table includes action record, EFC
+	 * pointers, encap pointers)
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_EXTERNAL_RECORD_TABLE \
+		UINT32_C(0x4)
+	/*
+	 * If set to 1, then EXT EM External Flow Counters table is
+	 * supported.
+	 * If set to 0, EXT EM External Flow Counters table is not
+	 * supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_EXTERNAL_FLOW_COUNTERS_TABLE \
+		UINT32_C(0x8)
+	/*
+	 * If set to 1, then FID table used for implicit flow flush
+	 * is supported.
+	 * If set to 0, then FID table used for implicit flow flush
+	 * is not supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_FID_TABLE \
+		UINT32_C(0x10)
+	/*
+	 * The maximum number of entries supported by EXT EM. When
+	 * configuring the host memory, the numbers of entries that
+	 * can be supported are -
+	 *      32k, 64k, 128k, 256k, 512k, 1M, 2M, 4M, 8M, 32M, 64M,
+	 *      128M entries.
+	 * For any value that is not one of these, the FW will round
+	 * down to the closest supported number of entries.
+	 */
+	uint32_t	max_entries_supported;
+	/*
+	 * The entry size in bytes of each entry in the EXT EM
+	 * KEY0/KEY1 tables.
+	 */
+	uint16_t	key_entry_size;
+	/*
+	 * The entry size in bytes of each entry in the EXT EM RECORD
+	 * tables.
+	 */
+	uint16_t	record_entry_size;
+	/* The entry size in bytes of each entry in the EXT EM EFC tables. */
+	uint16_t	efc_entry_size;
+	/* The FID size in bytes of each entry in the EXT EM FID tables. */
+	uint16_t	fid_entry_size;
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
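+
+/*
+ * Illustrative sketch only; not part of the generated HSI definitions.
+ * It mirrors the rounding rule described for max_entries_supported:
+ * a requested entry count is rounded down to the nearest value in the
+ * supported list (powers of two from 32k to 128M, with 16M absent).
+ */
+static inline uint32_t example_round_em_entries(uint32_t requested)
+{
+	uint32_t n = UINT32_C(128) * 1024 * 1024;	/* 128M entries */
+
+	if (requested < UINT32_C(32) * 1024)
+		return 0;	/* below the supported minimum */
+	while (n > requested)
+		n >>= 1;
+	if (n == UINT32_C(16) * 1024 * 1024)
+		n >>= 1;	/* 16M is absent from the supported list */
+	return n;
+}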
+
+/*********************
+ * hwrm_tf_ext_em_op *
+ *********************/
+
+
+/* hwrm_tf_ext_em_op_input (size:192b/24B) */
+struct hwrm_tf_ext_em_op_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint16_t	unused0;
+	/* The EXT EM operation to perform. */
+	uint16_t	op;
+	/* This value is reserved and should not be used. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_RESERVED       UINT32_C(0x0)
+	/*
+	 * To properly stop EXT EM and ensure there is no in-flight
+	 * DMA, the caller must disable EXT EM for the given PF,
+	 * using this call. This will safely disable EXT EM and
+	 * ensure that all DMA to the keys/records/efc has
+	 * completed.
+	 */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE UINT32_C(0x1)
+	/*
+	 * Once the EXT EM host memory and EXT EM options have been
+	 * configured, the caller should enable EXT EM for the given
+	 * PF. Note that once this call has been made, the EXT EM
+	 * mechanism will be active and DMA will occur as packets
+	 * are processed.
+	 */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE  UINT32_C(0x2)
+	/*
+	 * Clear EXT EM settings for the given PF so that the
+	 * register values are reset back to their initial state.
+	 */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_CLEANUP UINT32_C(0x3)
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_LAST \
+		HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_CLEANUP
+	/* unused. */
+	uint16_t	unused1;
+} __attribute__((packed));
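
As a usage illustration, a hedged sketch of how a caller might issue
the disable operation with this message. The function name, mailbox
choice and request-type enum name are assumptions inferred from the
message name and the driver conventions used elsewhere in this series,
not part of the patch:

	/* Sketch: disable EXT EM on the rx path before teardown. */
	static int bnxt_tf_ext_em_disable_rx(struct bnxt *bp)
	{
		struct hwrm_tf_ext_em_op_input req = { 0 };
		struct hwrm_tf_ext_em_op_output *resp = bp->hwrm_cmd_resp_addr;
		int rc;

		HWRM_PREP(&req, HWRM_TF_EXT_EM_OP, BNXT_USE_KONG(bp));
		req.flags =
			rte_cpu_to_le_16(HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_RX);
		req.op =
			rte_cpu_to_le_16(HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);

		rc = bnxt_hwrm_send_message(bp, &req, sizeof(req),
					    BNXT_USE_KONG(bp));
		HWRM_CHECK_RESULT();
		HWRM_UNLOCK();
		return rc;
	}
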
+
+/* hwrm_tf_ext_em_op_output (size:128b/16B) */
+struct hwrm_tf_ext_em_op_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/**********************
+ * hwrm_tf_ext_em_cfg *
+ **********************/
+
+
+/* hwrm_tf_ext_em_cfg_input (size:384b/48B) */
+struct hwrm_tf_ext_em_cfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR \
+		UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX \
+		UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX \
+		UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX
+	/* When set to 1, all offloaded flows will be sent to EXT EM. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x2)
+	/* When set to 1, indicates the secondary PF; 0 means the primary PF. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_SECONDARY_PF \
+		UINT32_C(0x4)
+	/*
+	 * Group_id used by the firmware to identify memory pools belonging
+	 * to a certain group.
+	 */
+	uint16_t	group_id;
+	/*
+	 * Dynamically reconfigure the EEM pending cache every 1/10th of a
+	 * second. If set to 0, the EEM HW flush of the pending cache is
+	 * disabled.
+	 */
+	uint8_t	flush_interval;
+	/* unused. */
+	uint8_t	unused0;
+	/*
+	 * Configures EXT EM with the given number of entries. The
+	 * EXT EM tables KEY0, KEY1, RECORD and EFC all have the
+	 * same number of entries and all tables will be configured
+	 * using this value. The current minimum value is 32k and
+	 * the current maximum value is 128M.
+	 */
+	uint32_t	num_entries;
+	/* unused. */
+	uint32_t	unused1;
+	/* Configure EXT EM with the given context id for the KEY0 table. */
+	uint16_t	key0_ctx_id;
+	/* Configure EXT EM with the given context id for the KEY1 table. */
+	uint16_t	key1_ctx_id;
+	/* Configure EXT EM with the given context id for the RECORD table. */
+	uint16_t	record_ctx_id;
+	/* Configure EXT EM with the given context id for the EFC table. */
+	uint16_t	efc_ctx_id;
+	/* Configure EXT EM with the given context id for the FID table. */
+	uint16_t	fid_ctx_id;
+	/* unused. */
+	uint16_t	unused2;
+	/* unused. */
+	uint32_t	unused3;
+} __attribute__((packed));
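
To make the field relationships concrete, a minimal sketch of
populating this request for the rx direction; the function name and
request-type enum name are hypothetical, and the context ids are
assumed to come from earlier table-scope allocation:

	/* Sketch: configure EXT EM on rx with pre-allocated ctx ids. */
	static int bnxt_tf_ext_em_cfg_rx(struct bnxt *bp, uint32_t entries,
					 uint16_t key0, uint16_t key1,
					 uint16_t rec, uint16_t efc,
					 uint16_t fid)
	{
		struct hwrm_tf_ext_em_cfg_input req = { 0 };
		struct hwrm_tf_ext_em_cfg_output *resp = bp->hwrm_cmd_resp_addr;
		int rc;

		HWRM_PREP(&req, HWRM_TF_EXT_EM_CFG, BNXT_USE_KONG(bp));
		req.flags =
			rte_cpu_to_le_32(HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
		/* All tables share this size; must be within 32k..128M. */
		req.num_entries = rte_cpu_to_le_32(entries);
		req.key0_ctx_id = rte_cpu_to_le_16(key0);
		req.key1_ctx_id = rte_cpu_to_le_16(key1);
		req.record_ctx_id = rte_cpu_to_le_16(rec);
		req.efc_ctx_id = rte_cpu_to_le_16(efc);
		req.fid_ctx_id = rte_cpu_to_le_16(fid);

		rc = bnxt_hwrm_send_message(bp, &req, sizeof(req),
					    BNXT_USE_KONG(bp));
		HWRM_CHECK_RESULT();
		HWRM_UNLOCK();
		return rc;
	}
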
+
+/* hwrm_tf_ext_em_cfg_output (size:128b/16B) */
+struct hwrm_tf_ext_em_cfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/***********************
+ * hwrm_tf_ext_em_qcfg *
+ ***********************/
+
+
+/* hwrm_tf_ext_em_qcfg_input (size:192b/24B) */
+struct hwrm_tf_ext_em_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint32_t	unused0;
+} __attribute__((packed));
+
+/* hwrm_tf_ext_em_qcfg_output (size:256b/32B) */
+struct hwrm_tf_ext_em_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR \
+		UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_RX \
+		UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_TX \
+		UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_TX
+	/* When set to 1, all offloaded flows will be sent to EXT EM. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x2)
+	/* The number of entries the FW has configured for EXT EM. */
+	uint32_t	num_entries;
+	/* The context id configured for the KEY0 table. */
+	uint16_t	key0_ctx_id;
+	/* The context id configured for the KEY1 table. */
+	uint16_t	key1_ctx_id;
+	/* The context id configured for the RECORD table. */
+	uint16_t	record_ctx_id;
+	/* The context id configured for the EFC table. */
+	uint16_t	efc_ctx_id;
+	/* The context id configured for the FID table. */
+	uint16_t	fid_ctx_id;
+	/* unused. */
+	uint8_t	unused0[5];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/********************
+ * hwrm_tf_tcam_set *
+ ********************/
+
+
+/* hwrm_tf_tcam_set_input (size:1024b/128B) */
+struct hwrm_tf_tcam_set_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX
+	/*
+	 * Indicates the device data is being sent via DMA; the
+	 * device data packing does not change.
+	 */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA     UINT32_C(0x2)
+	/*
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Index of TCAM entry. */
+	uint16_t	idx;
+	/* Number of bytes in the TCAM key. */
+	uint8_t	key_size;
+	/* Number of bytes in the TCAM result. */
+	uint8_t	result_size;
+	/*
+	 * Offset from which the mask bytes start in the device data
+	 * array; the key offset is always 0.
+	 */
+	uint8_t	mask_offset;
+	/* Offset from which the result bytes start in the device data array. */
+	uint8_t	result_offset;
+	/* unused. */
+	uint8_t	unused0[6];
+	/*
+	 * TCAM key located at offset 0, mask located at mask_offset
+	 * and result at result_offset for the device.
+	 */
+	uint8_t	dev_data[88];
 } __attribute__((packed));
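
The dev_data layout implied by key_size, mask_offset and result_offset
can be filled in as below; a sketch only, assuming the mask is the same
width as the key (the helper name is illustrative):

	#include <string.h>

	/* Sketch: pack key at offset 0, mask at mask_offset and the
	 * result at result_offset inside dev_data. */
	static void bnxt_tf_tcam_set_fill(struct hwrm_tf_tcam_set_input *req,
					  const uint8_t *key, uint8_t key_size,
					  const uint8_t *mask,
					  const uint8_t *result,
					  uint8_t result_size)
	{
		req->key_size = key_size;
		req->result_size = result_size;
		req->mask_offset = key_size;         /* mask follows key */
		req->result_offset = 2 * key_size;   /* result follows mask */

		memcpy(&req->dev_data[0], key, key_size);
		memcpy(&req->dev_data[req->mask_offset], mask, key_size);
		memcpy(&req->dev_data[req->result_offset], result, result_size);
	}
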
 
-/* hwrm_cfa_eem_qcfg_output (size:256b/32B) */
-struct hwrm_cfa_eem_qcfg_output {
+/* hwrm_tf_tcam_set_output (size:128b/16B) */
+struct hwrm_tf_tcam_set_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -31848,46 +35014,26 @@ struct hwrm_cfa_eem_qcfg_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	uint32_t	flags;
-	/* When set to 1, indicates the configuration is the TX flow. */
-	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_TX \
-		UINT32_C(0x1)
-	/* When set to 1, indicates the configuration is the RX flow. */
-	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_RX \
-		UINT32_C(0x2)
-	/* When set to 1, all offloaded flows will be sent to EEM. */
-	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD \
-		UINT32_C(0x4)
-	/* The number of entries the FW has configured for EEM. */
-	uint32_t	num_entries;
-	/* Configured EEM with the given context if for KEY0 table. */
-	uint16_t	key0_ctx_id;
-	/* Configured EEM with the given context if for KEY1 table. */
-	uint16_t	key1_ctx_id;
-	/* Configured EEM with the given context if for RECORD table. */
-	uint16_t	record_ctx_id;
-	/* Configured EEM with the given context if for EFC table. */
-	uint16_t	efc_ctx_id;
-	/* Configured EEM with the given context if for EFC table. */
-	uint16_t	fid_ctx_id;
-	uint8_t	unused_2[5];
+	/* unused. */
+	uint8_t	unused0[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
 
-/*******************
- * hwrm_cfa_eem_op *
- *******************/
+/********************
+ * hwrm_tf_tcam_get *
+ ********************/
 
 
-/* hwrm_cfa_eem_op_input (size:192b/24B) */
-struct hwrm_cfa_eem_op_input {
+/* hwrm_tf_tcam_get_input (size:256b/32B) */
+struct hwrm_tf_tcam_get_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -31916,49 +35062,31 @@ struct hwrm_cfa_eem_op_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
 	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_TX
 	/*
-	 * When set to 1, indicates the host memory which is passed will be
-	 * used for the TX flow offload function specified in fid.
-	 * Note if this bit is set then the path_rx bit can't be set.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
-	/*
-	 * When set to 1, indicates the host memory which is passed will be
-	 * used for the RX flow offload function specified in fid.
-	 * Note if this bit is set then the path_tx bit can't be set.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
-	uint16_t	unused_0;
-	/* The number of EEM key table entries to be configured. */
-	uint16_t	op;
-	/* This value is reserved and should not be used. */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_RESERVED    UINT32_C(0x0)
-	/*
-	 * To properly stop EEM and ensure there are no DMA's, the caller
-	 * must disable EEM for the given PF, using this call. This will
-	 * safely disable EEM and ensure that all DMA'ed to the
-	 * keys/records/efc have been completed.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_DISABLE UINT32_C(0x1)
-	/*
-	 * Once the EEM host memory has been configured, EEM options have
-	 * been configured. Then the caller should enable EEM for the given
-	 * PF. Note once this call has been made, then the EEM mechanism
-	 * will be active and DMA's will occur as packets are processed.
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
 	 */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_ENABLE  UINT32_C(0x2)
-	/*
-	 * Clear EEM settings for the given PF so that the register values
-	 * are reset back to there initial state.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP UINT32_C(0x3)
-	#define HWRM_CFA_EEM_OP_INPUT_OP_LAST \
-		HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP
+	uint32_t	type;
+	/* Index of a TCAM entry. */
+	uint16_t	idx;
+	/* unused. */
+	uint16_t	unused0;
 } __attribute__((packed));
 
-/* hwrm_cfa_eem_op_output (size:128b/16B) */
-struct hwrm_cfa_eem_op_output {
+/* hwrm_tf_tcam_get_output (size:2368b/296B) */
+struct hwrm_tf_tcam_get_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -31967,24 +35095,41 @@ struct hwrm_cfa_eem_op_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	uint8_t	unused_0[7];
+	/* Number of bytes in the TCAM key. */
+	uint8_t	key_size;
+	/* Number of bytes in the TCAM entry. */
+	uint8_t	result_size;
+	/* Offset from which the mask bytes start in the device data array. */
+	uint8_t	mask_offset;
+	/* Offset from which the result bytes start in the device data array. */
+	uint8_t	result_offset;
+	/* unused. */
+	uint8_t	unused0[4];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * TCAM key located at offset 0, mask located at mask_offset
+	 * and result at result_offset for the device.
+	 */
+	uint8_t	dev_data[272];
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
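
Reading a response back is the mirror image; a sketch that copies the
result bytes out using the offsets the FW reports (helper name
illustrative):

	/* Sketch: copy the result bytes out of a tcam_get response. */
	static void
	bnxt_tf_tcam_get_result(const struct hwrm_tf_tcam_get_output *resp,
				uint8_t *result)
	{
		memcpy(result, &resp->dev_data[resp->result_offset],
		       resp->result_size);
	}
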
 
-/********************************
- * hwrm_cfa_adv_flow_mgnt_qcaps *
- ********************************/
+/*********************
+ * hwrm_tf_tcam_move *
+ *********************/
 
 
-/* hwrm_cfa_adv_flow_mgnt_qcaps_input (size:256b/32B) */
-struct hwrm_cfa_adv_flow_mgnt_qcaps_input {
+/* hwrm_tf_tcam_move_input (size:1024b/128B) */
+struct hwrm_tf_tcam_move_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -32013,11 +35158,33 @@ struct hwrm_cfa_adv_flow_mgnt_qcaps_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-	uint32_t	unused_0[4];
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_TX
+	/*
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Number of TCAM index pairs to be swapped for the device. */
+	uint16_t	count;
+	/* unused. */
+	uint16_t	unused0;
+	/* TCAM index pairs to be swapped for the device. */
+	uint16_t	idx_pairs[48];
 } __attribute__((packed));
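
idx_pairs holds source/destination indices back to back, so a single
swap can be expressed as below; a sketch, with the helper name and the
(src, dst) pair ordering assumed:

	/* Sketch: request one (src, dst) TCAM entry swap. */
	static void bnxt_tf_tcam_move_one(struct hwrm_tf_tcam_move_input *req,
					  uint16_t src_idx, uint16_t dst_idx)
	{
		req->count = rte_cpu_to_le_16(1);
		req->idx_pairs[0] = rte_cpu_to_le_16(src_idx);
		req->idx_pairs[1] = rte_cpu_to_le_16(dst_idx);
	}
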
 
-/* hwrm_cfa_adv_flow_mgnt_qcaps_output (size:128b/16B) */
-struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
+/* hwrm_tf_tcam_move_output (size:128b/16B) */
+struct hwrm_tf_tcam_move_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -32026,131 +35193,26 @@ struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	uint32_t	flags;
-	/*
-	 * Value of 1 to indicate firmware support 16-bit flow handle.
-	 * Value of 0 to indicate firmware not support 16-bit flow handle.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_16BIT_SUPPORTED \
-		UINT32_C(0x1)
-	/*
-	 * Value of 1 to indicate firmware support 64-bit flow handle.
-	 * Value of 0 to indicate firmware not support 64-bit flow handle.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_64BIT_SUPPORTED \
-		UINT32_C(0x2)
-	/*
-	 * Value of 1 to indicate firmware support flow batch delete operation through
-	 * HWRM_CFA_FLOW_FLUSH command.
-	 * Value of 0 to indicate that the firmware does not support flow batch delete
-	 * operation.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_BATCH_DELETE_SUPPORTED \
-		UINT32_C(0x4)
-	/*
-	 * Value of 1 to indicate that the firmware support flow reset all operation through
-	 * HWRM_CFA_FLOW_FLUSH command.
-	 * Value of 0 indicates firmware does not support flow reset all operation.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_RESET_ALL_SUPPORTED \
-		UINT32_C(0x8)
-	/*
-	 * Value of 1 to indicate that firmware supports use of FID as dest_id in
-	 * HWRM_CFA_NTUPLE_ALLOC/CFG commands.
-	 * Value of 0 indicates firmware does not support use of FID as dest_id.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_DEST_FUNC_SUPPORTED \
-		UINT32_C(0x10)
-	/*
-	 * Value of 1 to indicate that firmware supports TX EEM flows.
-	 * Value of 0 indicates firmware does not support TX EEM flows.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_TX_EEM_FLOW_SUPPORTED \
-		UINT32_C(0x20)
-	/*
-	 * Value of 1 to indicate that firmware supports RX EEM flows.
-	 * Value of 0 indicates firmware does not support RX EEM flows.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RX_EEM_FLOW_SUPPORTED \
-		UINT32_C(0x40)
-	/*
-	 * Value of 1 to indicate that firmware supports the dynamic allocation of an
-	 * on-chip flow counter which can be used for EEM flows.
-	 * Value of 0 indicates firmware does not support the dynamic allocation of an
-	 * on-chip flow counter.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_COUNTER_ALLOC_SUPPORTED \
-		UINT32_C(0x80)
-	/*
-	 * Value of 1 to indicate that firmware supports setting of
-	 * rfs_ring_tbl_idx in HWRM_CFA_NTUPLE_ALLOC command.
-	 * Value of 0 indicates firmware does not support rfs_ring_tbl_idx.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_SUPPORTED \
-		UINT32_C(0x100)
-	/*
-	 * Value of 1 to indicate that firmware supports untagged matching
-	 * criteria on HWRM_CFA_L2_FILTER_ALLOC command. Value of 0
-	 * indicates firmware does not support untagged matching.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_UNTAGGED_VLAN_SUPPORTED \
-		UINT32_C(0x200)
-	/*
-	 * Value of 1 to indicate that firmware supports XDP filter. Value
-	 * of 0 indicates firmware does not support XDP filter.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_XDP_SUPPORTED \
-		UINT32_C(0x400)
-	/*
-	 * Value of 1 to indicate that the firmware support L2 header source
-	 * fields matching criteria on HWRM_CFA_L2_FILTER_ALLOC command.
-	 * Value of 0 indicates firmware does not support L2 header source
-	 * fields matching.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED \
-		UINT32_C(0x800)
-	/*
-	 * If set to 1, firmware is capable of supporting ARP ethertype as
-	 * matching criteria for HWRM_CFA_NTUPLE_FILTER_ALLOC command on the
-	 * RX direction. By default, this flag should be 0 for older version
-	 * of firmware.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ARP_SUPPORTED \
-		UINT32_C(0x1000)
-	/*
-	 * Value of 1 to indicate that firmware supports setting of
-	 * rfs_ring_tbl_idx in dst_id field of the HWRM_CFA_NTUPLE_ALLOC
-	 * command. Value of 0 indicates firmware does not support
-	 * rfs_ring_tbl_idx in dst_id field.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_V2_SUPPORTED \
-		UINT32_C(0x2000)
-	/*
-	 * If set to 1, firmware is capable of supporting IPv4/IPv6 as
-	 * ethertype in HWRM_CFA_NTUPLE_FILTER_ALLOC command on the RX
-	 * direction. By default, this flag should be 0 for older version
-	 * of firmware.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ETHERTYPE_IP_SUPPORTED \
-		UINT32_C(0x4000)
-	uint8_t	unused_0[3];
+	/* unused. */
+	uint8_t	unused0[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
 
-/******************
- * hwrm_cfa_tflib *
- ******************/
+/*********************
+ * hwrm_tf_tcam_free *
+ *********************/
 
 
-/* hwrm_cfa_tflib_input (size:1024b/128B) */
-struct hwrm_cfa_tflib_input {
+/* hwrm_tf_tcam_free_input (size:1024b/128B) */
+struct hwrm_tf_tcam_free_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -32179,18 +35241,33 @@ struct hwrm_cfa_tflib_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-	/* TFLIB message type. */
-	uint16_t	tf_type;
-	/* TFLIB message subtype. */
-	uint16_t	tf_subtype;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX
+	/*
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Number of TCAM indices to be deleted for the device. */
+	uint16_t	count;
 	/* unused. */
-	uint8_t	unused0[4];
-	/* TFLIB request data. */
-	uint32_t	tf_req[26];
+	uint16_t	unused0;
+	/* TCAM index list to be deleted for the device. */
+	uint16_t	idx_list[48];
 } __attribute__((packed));
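
Filling the free list is similar; a sketch that caps the count at the
48-entry array size (helper name illustrative):

	/* Sketch: populate the TCAM free list, at most 48 indices. */
	static void bnxt_tf_tcam_free_fill(struct hwrm_tf_tcam_free_input *req,
					   const uint16_t *idx, uint16_t count)
	{
		uint16_t i;

		if (count > 48)
			count = 48;
		req->count = rte_cpu_to_le_16(count);
		for (i = 0; i < count; i++)
			req->idx_list[i] = rte_cpu_to_le_16(idx[i]);
	}
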
 
-/* hwrm_cfa_tflib_output (size:5632b/704B) */
-struct hwrm_cfa_tflib_output {
+/* hwrm_tf_tcam_free_output (size:128b/16B) */
+struct hwrm_tf_tcam_free_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -32199,22 +35276,15 @@ struct hwrm_cfa_tflib_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	/* TFLIB message type. */
-	uint16_t	tf_type;
-	/* TFLIB message subtype. */
-	uint16_t	tf_subtype;
-	/* TFLIB response code */
-	uint32_t	tf_resp_code;
-	/* TFLIB response data. */
-	uint32_t	tf_resp[170];
 	/* unused. */
-	uint8_t	unused1[7];
+	uint8_t	unused0[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
@@ -33155,9 +36225,9 @@ struct pcie_ctx_hw_stats {
 	uint64_t	pcie_tl_signal_integrity;
 	/* Number of times LTSSM entered Recovery state */
 	uint64_t	pcie_link_integrity;
-	/* Number of TLP bytes that have been transmitted */
+	/* Reports the number of TLP bits that have been transmitted, in Mbps */
 	uint64_t	pcie_tx_traffic_rate;
-	/* Number of TLP bytes that have been received */
+	/* Reports the number of TLP bits that have been received, in Mbps */
 	uint64_t	pcie_rx_traffic_rate;
 	/* Number of DLLP bytes that have been transmitted */
 	uint64_t	pcie_tx_dllp_statistics;
@@ -33981,7 +37051,23 @@ struct hwrm_nvm_modify_input {
 	uint64_t	host_src_addr;
 	/* 16-bit directory entry index. */
 	uint16_t	dir_idx;
-	uint8_t	unused_0[2];
+	uint16_t	flags;
+	/*
+	 * This flag indicates the sender wants to modify a contiguous NVRAM
+	 * area using a batch of these HWRM requests. The offset of a request
+	 * must be contiguous with the end of the previous request. Firmware
+	 * does not update the directory entry until it receives the last
+	 * request, which is indicated by the batch_last flag.
+	 * This flag is usually set when the sender does not have a block of
+	 * memory big enough to hold the entire NVRAM data to send at one
+	 * time.
+	 */
+	#define HWRM_NVM_MODIFY_INPUT_FLAGS_BATCH_MODE     UINT32_C(0x1)
+	/*
+	 * This flag can be used only when the batch_mode flag is set.
+	 * It indicates that this request is the last of the batch requests.
+	 */
+	#define HWRM_NVM_MODIFY_INPUT_FLAGS_BATCH_LAST     UINT32_C(0x2)
 	/* 32-bit NVRAM byte-offset to modify content from. */
 	uint32_t	offset;
 	/*
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 02/34] net/bnxt: update hwrm prep to use ptr
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
  2020-04-14  8:12     ` [dpdk-dev] [PATCH v3 01/34] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
@ 2020-04-14  8:12     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 03/34] net/bnxt: add truflow message handlers Venkat Duvvuru
                       ` (32 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:12 UTC (permalink / raw)
  To: dev; +Cc: Randy Schacher

From: Randy Schacher <stuart.schacher@broadcom.com>

- Change HWRM_PREP to take a pointer and use the full
  HWRM enum name

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |   2 +-
 drivers/net/bnxt/bnxt_hwrm.c | 202 ++++++++++++++++++++++---------------------
 2 files changed, 103 insertions(+), 101 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 3ae08a2..b795ed6 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -594,7 +594,7 @@ struct bnxt {
 
 	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
 
-	uint16_t			hwrm_cmd_seq;
+	uint16_t			chimp_cmd_seq;
 	uint16_t			kong_cmd_seq;
 	void				*hwrm_cmd_resp_addr;
 	rte_iova_t			hwrm_cmd_resp_dma_addr;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index a9c9c72..93b2ea7 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -182,19 +182,19 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
  *
  * HWRM_UNLOCK() must be called after all response processing is completed.
  */
-#define HWRM_PREP(req, type, kong) do { \
+#define HWRM_PREP(req, type, kong) do {	\
 	rte_spinlock_lock(&bp->hwrm_lock); \
 	if (bp->hwrm_cmd_resp_addr == NULL) { \
 		rte_spinlock_unlock(&bp->hwrm_lock); \
 		return -EACCES; \
 	} \
 	memset(bp->hwrm_cmd_resp_addr, 0, bp->max_resp_len); \
-	req.req_type = rte_cpu_to_le_16(HWRM_##type); \
-	req.cmpl_ring = rte_cpu_to_le_16(-1); \
-	req.seq_id = kong ? rte_cpu_to_le_16(bp->kong_cmd_seq++) :\
-		rte_cpu_to_le_16(bp->hwrm_cmd_seq++); \
-	req.target_id = rte_cpu_to_le_16(0xffff); \
-	req.resp_addr = rte_cpu_to_le_64(bp->hwrm_cmd_resp_dma_addr); \
+	(req)->req_type = rte_cpu_to_le_16(type); \
+	(req)->cmpl_ring = rte_cpu_to_le_16(-1); \
+	(req)->seq_id = kong ? rte_cpu_to_le_16(bp->kong_cmd_seq++) :\
+		rte_cpu_to_le_16(bp->chimp_cmd_seq++); \
+	(req)->target_id = rte_cpu_to_le_16(0xffff); \
+	(req)->resp_addr = rte_cpu_to_le_64(bp->hwrm_cmd_resp_dma_addr); \
 } while (0)
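
With the macro now taking a pointer and the full enum name, a typical
call site looks like the following; this simply restates the pattern of
the conversions below:

	struct hwrm_func_reset_input req = {.req_type = 0 };
	struct hwrm_func_reset_output *resp = bp->hwrm_cmd_resp_addr;
	int rc;

	HWRM_PREP(&req, HWRM_FUNC_RESET, BNXT_USE_CHIMP_MB);
	req.enables = rte_cpu_to_le_32(0);
	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
	HWRM_CHECK_RESULT();
	HWRM_UNLOCK();
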
 
 #define HWRM_CHECK_RESULT_SILENT() do {\
@@ -263,7 +263,7 @@ int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	struct hwrm_cfa_l2_set_rx_mask_input req = {.req_type = 0 };
 	struct hwrm_cfa_l2_set_rx_mask_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 	req.mask = 0;
 
@@ -288,7 +288,7 @@ int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp,
 	if (vnic->fw_vnic_id == INVALID_HW_RING_ID)
 		return rc;
 
-	HWRM_PREP(req, CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
 	if (vnic->flags & BNXT_VNIC_INFO_BCAST)
@@ -347,7 +347,7 @@ int bnxt_hwrm_cfa_vlan_antispoof_cfg(struct bnxt *bp, uint16_t fid,
 				return 0;
 		}
 	}
-	HWRM_PREP(req, CFA_VLAN_ANTISPOOF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_VLAN_ANTISPOOF_CFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(fid);
 
 	req.vlan_tag_mask_tbl_addr =
@@ -389,7 +389,7 @@ int bnxt_hwrm_clear_l2_filter(struct bnxt *bp,
 	if (l2_filter->l2_ref_cnt > 0)
 		return 0;
 
-	HWRM_PREP(req, CFA_L2_FILTER_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_FILTER_FREE, BNXT_USE_CHIMP_MB);
 
 	req.l2_filter_id = rte_cpu_to_le_64(filter->fw_l2_filter_id);
 
@@ -440,7 +440,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 	if (filter->fw_l2_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_l2_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_L2_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -503,7 +503,7 @@ int bnxt_hwrm_ptp_cfg(struct bnxt *bp)
 	if (!ptp)
 		return 0;
 
-	HWRM_PREP(req, PORT_MAC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_MAC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (ptp->rx_filter)
 		flags |= HWRM_PORT_MAC_CFG_INPUT_FLAGS_PTP_RX_TS_CAPTURE_ENABLE;
@@ -536,7 +536,7 @@ static int bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
 	if (ptp)
 		return 0;
 
-	HWRM_PREP(req, PORT_MAC_PTP_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_MAC_PTP_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(bp->pf.port_id);
 
@@ -591,7 +591,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	uint32_t flags;
 	int i;
 
-	HWRM_PREP(req, FUNC_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCAPS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 
@@ -721,7 +721,7 @@ int bnxt_hwrm_vnic_qcaps(struct bnxt *bp)
 	struct hwrm_vnic_qcaps_input req = {.req_type = 0 };
 	struct hwrm_vnic_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_QCAPS, BNXT_USE_CHIMP_MB);
 
 	req.target_id = rte_cpu_to_le_16(0xffff);
 
@@ -748,7 +748,7 @@ int bnxt_hwrm_func_reset(struct bnxt *bp)
 	struct hwrm_func_reset_input req = {.req_type = 0 };
 	struct hwrm_func_reset_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_RESET, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_RESET, BNXT_USE_CHIMP_MB);
 
 	req.enables = rte_cpu_to_le_32(0);
 
@@ -781,7 +781,7 @@ int bnxt_hwrm_func_driver_register(struct bnxt *bp)
 	if ((BNXT_PF(bp) || BNXT_VF_IS_TRUSTED(bp)) && !BNXT_STINGRAY(bp))
 		flags |= HWRM_FUNC_DRV_RGTR_INPUT_FLAGS_MASTER_SUPPORT;
 
-	HWRM_PREP(req, FUNC_DRV_RGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_DRV_RGTR, BNXT_USE_CHIMP_MB);
 	req.enables = rte_cpu_to_le_32(HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_VER |
 			HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_ASYNC_EVENT_FWD);
 	req.ver_maj = RTE_VER_YEAR;
@@ -853,7 +853,7 @@ int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp, bool test)
 	struct hwrm_func_vf_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_func_vf_cfg_input req = {0};
 
-	HWRM_PREP(req, FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
 
 	enables = HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_RX_RINGS  |
 		  HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_TX_RINGS   |
@@ -919,7 +919,7 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp)
 	struct hwrm_func_resource_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_func_resource_qcaps_input req = {0};
 
-	HWRM_PREP(req, FUNC_RESOURCE_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_RESOURCE_QCAPS, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -964,7 +964,7 @@ int bnxt_hwrm_ver_get(struct bnxt *bp, uint32_t timeout)
 
 	bp->max_req_len = HWRM_MAX_REQ_LEN;
 	bp->hwrm_cmd_timeout = timeout;
-	HWRM_PREP(req, VER_GET, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VER_GET, BNXT_USE_CHIMP_MB);
 
 	req.hwrm_intf_maj = HWRM_VERSION_MAJOR;
 	req.hwrm_intf_min = HWRM_VERSION_MINOR;
@@ -1104,7 +1104,7 @@ int bnxt_hwrm_func_driver_unregister(struct bnxt *bp, uint32_t flags)
 	if (!(bp->flags & BNXT_FLAG_REGISTERED))
 		return 0;
 
-	HWRM_PREP(req, FUNC_DRV_UNRGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_DRV_UNRGTR, BNXT_USE_CHIMP_MB);
 	req.flags = flags;
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -1122,7 +1122,7 @@ static int bnxt_hwrm_port_phy_cfg(struct bnxt *bp, struct bnxt_link_info *conf)
 	struct hwrm_port_phy_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	uint32_t enables = 0;
 
-	HWRM_PREP(req, PORT_PHY_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_PHY_CFG, BNXT_USE_CHIMP_MB);
 
 	if (conf->link_up) {
 		/* Setting Fixed Speed. But AutoNeg is ON, So disable it */
@@ -1186,7 +1186,7 @@ static int bnxt_hwrm_port_phy_qcfg(struct bnxt *bp,
 	struct hwrm_port_phy_qcfg_input req = {0};
 	struct hwrm_port_phy_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, PORT_PHY_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_PHY_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -1265,7 +1265,7 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 	int i;
 
 get_rx_info:
-	HWRM_PREP(req, QUEUE_QPORTCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_QUEUE_QPORTCFG, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(dir);
 	/* HWRM Version >= 1.9.1 only if COS Classification is not required. */
@@ -1353,7 +1353,7 @@ int bnxt_hwrm_ring_alloc(struct bnxt *bp,
 	struct rte_mempool *mb_pool;
 	uint16_t rx_buf_size;
 
-	HWRM_PREP(req, RING_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.page_tbl_addr = rte_cpu_to_le_64(ring->bd_dma);
 	req.fbo = rte_cpu_to_le_32(0);
@@ -1477,7 +1477,7 @@ int bnxt_hwrm_ring_free(struct bnxt *bp,
 	struct hwrm_ring_free_input req = {.req_type = 0 };
 	struct hwrm_ring_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_FREE, BNXT_USE_CHIMP_MB);
 
 	req.ring_type = ring_type;
 	req.ring_id = rte_cpu_to_le_16(ring->fw_ring_id);
@@ -1525,7 +1525,7 @@ int bnxt_hwrm_ring_grp_alloc(struct bnxt *bp, unsigned int idx)
 	struct hwrm_ring_grp_alloc_input req = {.req_type = 0 };
 	struct hwrm_ring_grp_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_GRP_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_GRP_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.cr = rte_cpu_to_le_16(bp->grp_info[idx].cp_fw_ring_id);
 	req.rr = rte_cpu_to_le_16(bp->grp_info[idx].rx_fw_ring_id);
@@ -1549,7 +1549,7 @@ int bnxt_hwrm_ring_grp_free(struct bnxt *bp, unsigned int idx)
 	struct hwrm_ring_grp_free_input req = {.req_type = 0 };
 	struct hwrm_ring_grp_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_GRP_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_GRP_FREE, BNXT_USE_CHIMP_MB);
 
 	req.ring_group_id = rte_cpu_to_le_16(bp->grp_info[idx].fw_grp_id);
 
@@ -1571,7 +1571,7 @@ int bnxt_hwrm_stat_clear(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
 	if (cpr->hw_stats_ctx_id == (uint32_t)HWRM_NA_SIGNATURE)
 		return rc;
 
-	HWRM_PREP(req, STAT_CTX_CLR_STATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_CLR_STATS, BNXT_USE_CHIMP_MB);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cpr->hw_stats_ctx_id);
 
@@ -1590,7 +1590,7 @@ int bnxt_hwrm_stat_ctx_alloc(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	struct hwrm_stat_ctx_alloc_input req = {.req_type = 0 };
 	struct hwrm_stat_ctx_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.update_period_ms = rte_cpu_to_le_32(0);
 
@@ -1614,7 +1614,7 @@ int bnxt_hwrm_stat_ctx_free(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	struct hwrm_stat_ctx_free_input req = {.req_type = 0 };
 	struct hwrm_stat_ctx_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_FREE, BNXT_USE_CHIMP_MB);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cpr->hw_stats_ctx_id);
 
@@ -1648,7 +1648,7 @@ int bnxt_hwrm_vnic_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 
 skip_ring_grps:
 	vnic->mru = BNXT_VNIC_MRU(bp->eth_dev->data->mtu);
-	HWRM_PREP(req, VNIC_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_ALLOC, BNXT_USE_CHIMP_MB);
 
 	if (vnic->func_default)
 		req.flags =
@@ -1671,7 +1671,7 @@ static int bnxt_hwrm_vnic_plcmodes_qcfg(struct bnxt *bp,
 	struct hwrm_vnic_plcmodes_qcfg_input req = {.req_type = 0 };
 	struct hwrm_vnic_plcmodes_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_PLCMODES_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
@@ -1704,7 +1704,7 @@ static int bnxt_hwrm_vnic_plcmodes_cfg(struct bnxt *bp,
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 	req.flags = rte_cpu_to_le_32(pmode->flags);
@@ -1743,7 +1743,7 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	if (rc)
 		return rc;
 
-	HWRM_PREP(req, VNIC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (BNXT_CHIP_THOR(bp)) {
 		int dflt_rxq = vnic->start_grp_id;
@@ -1847,7 +1847,7 @@ int bnxt_hwrm_vnic_qcfg(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 		PMD_DRV_LOG(DEBUG, "VNIC QCFG ID %d\n", vnic->fw_vnic_id);
 		return rc;
 	}
-	HWRM_PREP(req, VNIC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.enables =
 		rte_cpu_to_le_32(HWRM_VNIC_QCFG_INPUT_ENABLES_VF_ID_VALID);
@@ -1890,7 +1890,7 @@ int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp,
 	struct hwrm_vnic_rss_cos_lb_ctx_alloc_output *resp =
 						bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_RSS_COS_LB_CTX_ALLOC, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -1919,7 +1919,7 @@ int _bnxt_hwrm_vnic_ctx_free(struct bnxt *bp,
 		PMD_DRV_LOG(DEBUG, "VNIC RSS Rule %x\n", vnic->rss_rule);
 		return rc;
 	}
-	HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_RSS_COS_LB_CTX_FREE, BNXT_USE_CHIMP_MB);
 
 	req.rss_cos_lb_ctx_id = rte_cpu_to_le_16(ctx_idx);
 
@@ -1964,7 +1964,7 @@ int bnxt_hwrm_vnic_free(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_FREE, BNXT_USE_CHIMP_MB);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
@@ -1991,7 +1991,7 @@ bnxt_hwrm_vnic_rss_cfg_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	struct hwrm_vnic_rss_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 
 	for (i = 0; i < nr_ctxs; i++) {
-		HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+		HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
 
 		req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 		req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
@@ -2029,7 +2029,7 @@ int bnxt_hwrm_vnic_rss_cfg(struct bnxt *bp,
 	if (BNXT_CHIP_THOR(bp))
 		return bnxt_hwrm_vnic_rss_cfg_thor(bp, vnic);
 
-	HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
 
 	req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
 	req.hash_mode_flags = vnic->hash_mode;
@@ -2062,7 +2062,7 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp,
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(
 			HWRM_VNIC_PLCMODES_CFG_INPUT_FLAGS_JUMBO_PLACEMENT);
@@ -2103,7 +2103,7 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp,
 		return 0;
 	}
 
-	HWRM_PREP(req, VNIC_TPA_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_TPA_CFG, BNXT_USE_CHIMP_MB);
 
 	if (enable) {
 		req.enables = rte_cpu_to_le_32(
@@ -2143,7 +2143,7 @@ int bnxt_hwrm_func_vf_mac(struct bnxt *bp, uint16_t vf, const uint8_t *mac_addr)
 	memcpy(req.dflt_mac_addr, mac_addr, sizeof(req.dflt_mac_addr));
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -2161,7 +2161,7 @@ int bnxt_hwrm_func_qstats_tx_drop(struct bnxt *bp, uint16_t fid,
 	struct hwrm_func_qstats_input req = {.req_type = 0};
 	struct hwrm_func_qstats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_QSTATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QSTATS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
@@ -2184,7 +2184,7 @@ int bnxt_hwrm_func_qstats(struct bnxt *bp, uint16_t fid,
 	struct hwrm_func_qstats_input req = {.req_type = 0};
 	struct hwrm_func_qstats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_QSTATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QSTATS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
@@ -2221,7 +2221,7 @@ int bnxt_hwrm_func_clr_stats(struct bnxt *bp, uint16_t fid)
 	struct hwrm_func_clr_stats_input req = {.req_type = 0};
 	struct hwrm_func_clr_stats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_CLR_STATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CLR_STATS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
@@ -2928,7 +2928,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	uint16_t flags;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3037,7 +3037,7 @@ static int bnxt_hwrm_pf_func_cfg(struct bnxt *bp, int tx_rings)
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.enables = rte_cpu_to_le_32(enables);
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3109,7 +3109,7 @@ static int reserve_resources_from_vf(struct bnxt *bp,
 	int rc;
 
 	/* Get the actual allocated values now */
-	HWRM_PREP(req, FUNC_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCAPS, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3147,7 +3147,7 @@ int bnxt_hwrm_func_qcfg_current_vf_vlan(struct bnxt *bp, int vf)
 	int rc;
 
 	/* Check for zero MAC address */
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -3165,7 +3165,7 @@ static int update_pf_resource_max(struct bnxt *bp)
 	int rc;
 
 	/* And copy the allocated numbers into the pf struct */
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -3268,7 +3268,7 @@ int bnxt_hwrm_allocate_vfs(struct bnxt *bp, int num_vfs)
 	for (i = 0; i < num_vfs; i++) {
 		add_random_mac_if_needed(bp, &req, i);
 
-		HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+		HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 		req.flags = rte_cpu_to_le_32(bp->pf.vf_info[i].func_cfg_flags);
 		req.fid = rte_cpu_to_le_16(bp->pf.vf_info[i].fid);
 		rc = bnxt_hwrm_send_message(bp,
@@ -3324,7 +3324,7 @@ int bnxt_hwrm_pf_evb_mode(struct bnxt *bp)
 	struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.enables = rte_cpu_to_le_32(HWRM_FUNC_CFG_INPUT_ENABLES_EVB_MODE);
@@ -3344,7 +3344,7 @@ int bnxt_hwrm_tunnel_dst_port_alloc(struct bnxt *bp, uint16_t port,
 	struct hwrm_tunnel_dst_port_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, TUNNEL_DST_PORT_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_TUNNEL_DST_PORT_ALLOC, BNXT_USE_CHIMP_MB);
 	req.tunnel_type = tunnel_type;
 	req.tunnel_dst_port_val = port;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3375,7 +3375,7 @@ int bnxt_hwrm_tunnel_dst_port_free(struct bnxt *bp, uint16_t port,
 	struct hwrm_tunnel_dst_port_free_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, TUNNEL_DST_PORT_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_TUNNEL_DST_PORT_FREE, BNXT_USE_CHIMP_MB);
 
 	req.tunnel_type = tunnel_type;
 	req.tunnel_dst_port_id = rte_cpu_to_be_16(port);
@@ -3394,7 +3394,7 @@ int bnxt_hwrm_func_cfg_vf_set_flags(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.flags = rte_cpu_to_le_32(flags);
@@ -3424,7 +3424,7 @@ int bnxt_hwrm_func_buf_rgtr(struct bnxt *bp)
 	struct hwrm_func_buf_rgtr_input req = {.req_type = 0 };
 	struct hwrm_func_buf_rgtr_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_BUF_RGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BUF_RGTR, BNXT_USE_CHIMP_MB);
 
 	req.req_buf_num_pages = rte_cpu_to_le_16(1);
 	req.req_buf_page_size = rte_cpu_to_le_16(
@@ -3455,7 +3455,7 @@ int bnxt_hwrm_func_buf_unrgtr(struct bnxt *bp)
 	if (!(BNXT_PF(bp) && bp->pdev->max_vfs))
 		return 0;
 
-	HWRM_PREP(req, FUNC_BUF_UNRGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BUF_UNRGTR, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3471,7 +3471,7 @@ int bnxt_hwrm_func_cfg_def_cp(struct bnxt *bp)
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.flags = rte_cpu_to_le_32(bp->pf.func_cfg_flags);
@@ -3493,7 +3493,7 @@ int bnxt_hwrm_vf_func_cfg_def_cp(struct bnxt *bp)
 	struct hwrm_func_vf_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
 
 	req.enables = rte_cpu_to_le_32(
 			HWRM_FUNC_VF_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
@@ -3515,7 +3515,7 @@ int bnxt_hwrm_set_default_vlan(struct bnxt *bp, int vf, uint8_t is_vf)
 	uint32_t func_cfg_flags;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (is_vf) {
 		dflt_vlan = bp->pf.vf_info[vf].dflt_vlan;
@@ -3547,7 +3547,7 @@ int bnxt_hwrm_func_bw_cfg(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.enables |= rte_cpu_to_le_32(enables);
@@ -3567,7 +3567,7 @@ int bnxt_hwrm_set_vf_vlan(struct bnxt *bp, int vf)
 	struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(bp->pf.vf_info[vf].func_cfg_flags);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
@@ -3604,7 +3604,7 @@ int bnxt_hwrm_reject_fwd_resp(struct bnxt *bp, uint16_t target_id,
 	if (ec_size > sizeof(req.encap_request))
 		return -1;
 
-	HWRM_PREP(req, REJECT_FWD_RESP, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_REJECT_FWD_RESP, BNXT_USE_CHIMP_MB);
 
 	req.encap_resp_target_id = rte_cpu_to_le_16(target_id);
 	memcpy(req.encap_request, encaped, ec_size);
@@ -3624,7 +3624,7 @@ int bnxt_hwrm_func_qcfg_vf_default_mac(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc;
 
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3648,7 +3648,7 @@ int bnxt_hwrm_exec_fwd_resp(struct bnxt *bp, uint16_t target_id,
 	if (ec_size > sizeof(req.encap_request))
 		return -1;
 
-	HWRM_PREP(req, EXEC_FWD_RESP, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_EXEC_FWD_RESP, BNXT_USE_CHIMP_MB);
 
 	req.encap_resp_target_id = rte_cpu_to_le_16(target_id);
 	memcpy(req.encap_request, encaped, ec_size);
@@ -3668,7 +3668,7 @@ int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
 	struct hwrm_stat_ctx_query_input req = {.req_type = 0};
 	struct hwrm_stat_ctx_query_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_QUERY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_QUERY, BNXT_USE_CHIMP_MB);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cid);
 
@@ -3706,7 +3706,7 @@ int bnxt_hwrm_port_qstats(struct bnxt *bp)
 	struct bnxt_pf_info *pf = &bp->pf;
 	int rc;
 
-	HWRM_PREP(req, PORT_QSTATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_QSTATS, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	req.tx_stat_host_addr = rte_cpu_to_le_64(bp->hw_tx_port_stats_map);
@@ -3731,7 +3731,7 @@ int bnxt_hwrm_port_clr_stats(struct bnxt *bp)
 	    BNXT_NPAR(bp) || BNXT_MH(bp) || BNXT_TOTAL_VFS(bp))
 		return 0;
 
-	HWRM_PREP(req, PORT_CLR_STATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_CLR_STATS, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3751,7 +3751,7 @@ int bnxt_hwrm_port_led_qcaps(struct bnxt *bp)
 	if (BNXT_VF(bp))
 		return 0;
 
-	HWRM_PREP(req, PORT_LED_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_LED_QCAPS, BNXT_USE_CHIMP_MB);
 	req.port_id = bp->pf.port_id;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3793,7 +3793,7 @@ int bnxt_hwrm_port_led_cfg(struct bnxt *bp, bool led_on)
 	if (!bp->num_leds || BNXT_VF(bp))
 		return -EOPNOTSUPP;
 
-	HWRM_PREP(req, PORT_LED_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_LED_CFG, BNXT_USE_CHIMP_MB);
 
 	if (led_on) {
 		led_state = HWRM_PORT_LED_CFG_INPUT_LED0_STATE_BLINKALT;
@@ -3826,7 +3826,7 @@ int bnxt_hwrm_nvm_get_dir_info(struct bnxt *bp, uint32_t *entries,
 	struct hwrm_nvm_get_dir_info_input req = {0};
 	struct hwrm_nvm_get_dir_info_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, NVM_GET_DIR_INFO, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_GET_DIR_INFO, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3869,7 +3869,7 @@ int bnxt_get_nvram_directory(struct bnxt *bp, uint32_t len, uint8_t *data)
 			"unable to map response address to physical memory\n");
 		return -ENOMEM;
 	}
-	HWRM_PREP(req, NVM_GET_DIR_ENTRIES, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_GET_DIR_ENTRIES, BNXT_USE_CHIMP_MB);
 	req.host_dest_addr = rte_cpu_to_le_64(dma_handle);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3903,7 +3903,7 @@ int bnxt_hwrm_get_nvram_item(struct bnxt *bp, uint32_t index,
 			"unable to map response address to physical memory\n");
 		return -ENOMEM;
 	}
-	HWRM_PREP(req, NVM_READ, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_READ, BNXT_USE_CHIMP_MB);
 	req.host_dest_addr = rte_cpu_to_le_64(dma_handle);
 	req.dir_idx = rte_cpu_to_le_16(index);
 	req.offset = rte_cpu_to_le_32(offset);
@@ -3925,7 +3925,7 @@ int bnxt_hwrm_erase_nvram_directory(struct bnxt *bp, uint8_t index)
 	struct hwrm_nvm_erase_dir_entry_input req = {0};
 	struct hwrm_nvm_erase_dir_entry_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, NVM_ERASE_DIR_ENTRY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_ERASE_DIR_ENTRY, BNXT_USE_CHIMP_MB);
 	req.dir_idx = rte_cpu_to_le_16(index);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -3958,7 +3958,7 @@ int bnxt_hwrm_flash_nvram(struct bnxt *bp, uint16_t dir_type,
 	}
 	memcpy(buf, data, data_len);
 
-	HWRM_PREP(req, NVM_WRITE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_WRITE, BNXT_USE_CHIMP_MB);
 
 	req.dir_type = rte_cpu_to_le_16(dir_type);
 	req.dir_ordinal = rte_cpu_to_le_16(dir_ordinal);
@@ -4009,7 +4009,7 @@ static int bnxt_hwrm_func_vf_vnic_query(struct bnxt *bp, uint16_t vf,
 	int rc;
 
 	/* First query all VNIC ids */
-	HWRM_PREP(req, FUNC_VF_VNIC_IDS_QUERY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_VNIC_IDS_QUERY, BNXT_USE_CHIMP_MB);
 
 	req.vf_id = rte_cpu_to_le_16(bp->pf.first_vf_id + vf);
 	req.max_vnic_id_cnt = rte_cpu_to_le_32(bp->pf.total_vnics);
@@ -4091,7 +4091,7 @@ int bnxt_hwrm_func_cfg_vf_set_vlan_anti_spoof(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.enables |= rte_cpu_to_le_32(
@@ -4166,7 +4166,7 @@ int bnxt_hwrm_set_em_filter(struct bnxt *bp,
 	if (filter->fw_em_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_em_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_EM_FLOW_ALLOC, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_CFA_EM_FLOW_ALLOC, BNXT_USE_KONG(bp));
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -4238,7 +4238,7 @@ int bnxt_hwrm_clear_em_filter(struct bnxt *bp, struct bnxt_filter_info *filter)
 	if (filter->fw_em_filter_id == UINT64_MAX)
 		return 0;
 
-	HWRM_PREP(req, CFA_EM_FLOW_FREE, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_CFA_EM_FLOW_FREE, BNXT_USE_KONG(bp));
 
 	req.em_filter_id = rte_cpu_to_le_64(filter->fw_em_filter_id);
 
@@ -4266,7 +4266,7 @@ int bnxt_hwrm_set_ntuple_filter(struct bnxt *bp,
 	if (filter->fw_ntuple_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_ntuple_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_NTUPLE_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_NTUPLE_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -4346,7 +4346,7 @@ int bnxt_hwrm_clear_ntuple_filter(struct bnxt *bp,
 	if (filter->fw_ntuple_filter_id == UINT64_MAX)
 		return 0;
 
-	HWRM_PREP(req, CFA_NTUPLE_FILTER_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_NTUPLE_FILTER_FREE, BNXT_USE_CHIMP_MB);
 
 	req.ntuple_filter_id = rte_cpu_to_le_64(filter->fw_ntuple_filter_id);
 
@@ -4377,7 +4377,7 @@ bnxt_vnic_rss_configure_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 		struct bnxt_rx_ring_info *rxr;
 		struct bnxt_cp_ring_info *cpr;
 
-		HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+		HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
 
 		req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 		req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
@@ -4509,7 +4509,7 @@ static int bnxt_hwrm_set_coal_params_thor(struct bnxt *bp,
 	uint16_t flags;
 	int rc;
 
-	HWRM_PREP(req, RING_AGGINT_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_AGGINT_QCAPS, BNXT_USE_CHIMP_MB);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
 
@@ -4546,7 +4546,9 @@ int bnxt_hwrm_set_ring_coal(struct bnxt *bp,
 		return 0;
 	}
 
-	HWRM_PREP(req, RING_CMPL_RING_CFG_AGGINT_PARAMS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req,
+		  HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS,
+		  BNXT_USE_CHIMP_MB);
 	req.ring_id = rte_cpu_to_le_16(ring_id);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -4571,7 +4573,7 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 	    bp->ctx)
 		return 0;
 
-	HWRM_PREP(req, FUNC_BACKING_STORE_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS, BNXT_USE_CHIMP_MB);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT_SILENT();
 
@@ -4650,7 +4652,7 @@ int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, uint32_t enables)
 	if (!ctx)
 		return 0;
 
-	HWRM_PREP(req, FUNC_BACKING_STORE_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_CFG, BNXT_USE_CHIMP_MB);
 	req.enables = rte_cpu_to_le_32(enables);
 
 	if (enables & HWRM_FUNC_BACKING_STORE_CFG_INPUT_ENABLES_QP) {
@@ -4743,7 +4745,7 @@ int bnxt_hwrm_ext_port_qstats(struct bnxt *bp)
 	      bp->flags & BNXT_FLAG_EXT_TX_PORT_STATS))
 		return 0;
 
-	HWRM_PREP(req, PORT_QSTATS_EXT, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_QSTATS_EXT, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	if (bp->flags & BNXT_FLAG_EXT_TX_PORT_STATS) {
@@ -4784,7 +4786,7 @@ bnxt_hwrm_tunnel_redirect(struct bnxt *bp, uint8_t type)
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_TUNNEL_TYPE_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_TUNNEL_TYPE_ALLOC, BNXT_USE_CHIMP_MB);
 	req.tunnel_type = type;
 	req.dest_fid = bp->fw_fid;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -4803,7 +4805,7 @@ bnxt_hwrm_tunnel_redirect_free(struct bnxt *bp, uint8_t type)
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_TUNNEL_TYPE_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_TUNNEL_TYPE_FREE, BNXT_USE_CHIMP_MB);
 	req.tunnel_type = type;
 	req.dest_fid = bp->fw_fid;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -4821,7 +4823,7 @@ int bnxt_hwrm_tunnel_redirect_query(struct bnxt *bp, uint32_t *type)
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_QUERY_TUNNEL_TYPE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_QUERY_TUNNEL_TYPE, BNXT_USE_CHIMP_MB);
 	req.src_fid = bp->fw_fid;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -4842,7 +4844,7 @@ int bnxt_hwrm_tunnel_redirect_info(struct bnxt *bp, uint8_t tun_type,
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_TUNNEL_TYPE_INFO, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_TUNNEL_TYPE_INFO, BNXT_USE_CHIMP_MB);
 	req.src_fid = bp->fw_fid;
 	req.tunnel_type = tun_type;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -4867,7 +4869,7 @@ int bnxt_hwrm_set_mac(struct bnxt *bp)
 	if (!BNXT_VF(bp))
 		return 0;
 
-	HWRM_PREP(req, FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
 
 	req.enables =
 		rte_cpu_to_le_32(HWRM_FUNC_VF_CFG_INPUT_ENABLES_DFLT_MAC_ADDR);
@@ -4900,7 +4902,7 @@ int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
 	if (!up && (bp->flags & BNXT_FLAG_FW_RESET))
 		return 0;
 
-	HWRM_PREP(req, FUNC_DRV_IF_CHANGE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_DRV_IF_CHANGE, BNXT_USE_CHIMP_MB);
 
 	if (up)
 		req.flags =
@@ -4946,7 +4948,7 @@ int bnxt_hwrm_error_recovery_qcfg(struct bnxt *bp)
 		memset(info, 0, sizeof(*info));
 	}
 
-	HWRM_PREP(req, ERROR_RECOVERY_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_ERROR_RECOVERY_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -5022,7 +5024,7 @@ int bnxt_hwrm_fw_reset(struct bnxt *bp)
 	if (!BNXT_PF(bp))
 		return -EOPNOTSUPP;
 
-	HWRM_PREP(req, FW_RESET, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_FW_RESET, BNXT_USE_KONG(bp));
 
 	req.embedded_proc_type =
 		HWRM_FW_RESET_INPUT_EMBEDDED_PROC_TYPE_CHIP;
@@ -5050,7 +5052,7 @@ int bnxt_hwrm_port_ts_query(struct bnxt *bp, uint8_t path, uint64_t *timestamp)
 	if (!ptp)
 		return 0;
 
-	HWRM_PREP(req, PORT_TS_QUERY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_TS_QUERY, BNXT_USE_CHIMP_MB);
 
 	switch (path) {
 	case BNXT_PTP_FLAGS_PATH_TX:
@@ -5098,7 +5100,7 @@ int bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(struct bnxt *bp)
 		return 0;
 	}
 
-	HWRM_PREP(req, CFA_ADV_FLOW_MGNT_QCAPS, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_CFA_ADV_FLOW_MGNT_QCAPS, BNXT_USE_KONG(bp));
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_KONG(bp));
 
 	HWRM_CHECK_RESULT();
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 03/34] net/bnxt: add truflow message handlers
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
  2020-04-14  8:12     ` [dpdk-dev] [PATCH v3 01/34] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
  2020-04-14  8:12     ` [dpdk-dev] [PATCH v3 02/34] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 04/34] net/bnxt: add initial tf core session open Venkat Duvvuru
                       ` (31 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Pete Spreadborough, Randy Schacher

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- Add bnxt message functions for truflow APIs
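
For reference, a minimal caller sketch of the tunneled interface (hypothetical
payload and error handling; assumes a valid bp whose HWRM channel is already
configured, and uses the TF type/subtype constants that arrive with the
tf_core headers later in this series):

	uint32_t tf_resp_code = 0;
	uint8_t tf_req[16] = { 0 };	/* hypothetical TF request payload */
	int rc;

	rc = bnxt_hwrm_tf_message_tunneled(bp,
					   false,	/* CHIMP mailbox */
					   TF_TYPE_TRUFLOW,
					   HWRM_TFT_SESSION_ATTACH,
					   &tf_resp_code,
					   tf_req, sizeof(tf_req),
					   NULL, 0);	/* no response copy */
	if (rc == 0 && tf_resp_code != 0)
		rc = -EIO;	/* HWRM succeeded, TFLIB reported an error */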

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c | 83 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h | 18 ++++++++++
 2 files changed, 101 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 93b2ea7..443553b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -257,6 +257,89 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
 
 #define HWRM_UNLOCK()		rte_spinlock_unlock(&bp->hwrm_lock)
 
+int bnxt_hwrm_tf_message_direct(struct bnxt *bp,
+				bool use_kong_mb,
+				uint16_t msg_type,
+				void *msg,
+				uint32_t msg_len,
+				void *resp_msg,
+				uint32_t resp_len)
+{
+	int rc = 0;
+	bool mailbox = BNXT_USE_CHIMP_MB;
+	struct input *req = msg;
+	struct output *resp = bp->hwrm_cmd_resp_addr;
+
+	if (use_kong_mb)
+		mailbox = BNXT_USE_KONG(bp);
+
+	HWRM_PREP(req, msg_type, mailbox);
+
+	rc = bnxt_hwrm_send_message(bp, req, msg_len, mailbox);
+
+	HWRM_CHECK_RESULT();
+
+	if (resp_msg)
+		memcpy(resp_msg, resp, resp_len);
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
+int bnxt_hwrm_tf_message_tunneled(struct bnxt *bp,
+				  bool use_kong_mb,
+				  uint16_t tf_type,
+				  uint16_t tf_subtype,
+				  uint32_t *tf_response_code,
+				  void *msg,
+				  uint32_t msg_len,
+				  void *response,
+				  uint32_t response_len)
+{
+	int rc = 0;
+	struct hwrm_cfa_tflib_input req = { .req_type = 0 };
+	struct hwrm_cfa_tflib_output *resp = bp->hwrm_cmd_resp_addr;
+	bool mailbox = BNXT_USE_CHIMP_MB;
+
+	if (msg_len > sizeof(req.tf_req))
+		return -ENOMEM;
+
+	if (use_kong_mb)
+		mailbox = BNXT_USE_KONG(bp);
+
+	HWRM_PREP(&req, HWRM_TF, mailbox);
+	/* Build the request using the user-supplied request payload.
+	 * The TLV request size is checked at build time against the
+	 * HWRM request max size, so no runtime check is required.
+	 */
+	req.tf_type = tf_type;
+	req.tf_subtype = tf_subtype;
+	memcpy(req.tf_req, msg, msg_len);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), mailbox);
+	HWRM_CHECK_RESULT();
+
+	/* Copy the response to the user-provided response buffer.
+	 * Post-process the response data: copy only the 'payload',
+	 * as the HWRM data structure really is HWRM header + msg
+	 * header + payload and TFLIB only provides a payload
+	 * placeholder.
+	 */
+	if (response != NULL &&
+	    response_len != 0) {
+		memcpy(response,
+		       resp->tf_resp,
+		       response_len);
+	}
+
+	/* Extract the internal tflib response code */
+	*tf_response_code = resp->tf_resp_code;
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
 int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 {
 	int rc = 0;
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 5eb2ee8..df7aa74 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -69,6 +69,24 @@ HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED
 	bp->rx_cos_queue[x].profile =	\
 		resp->queue_id##x##_service_profile
 
+int bnxt_hwrm_tf_message_tunneled(struct bnxt *bp,
+				  bool use_kong_mb,
+				  uint16_t tf_type,
+				  uint16_t tf_subtype,
+				  uint32_t *tf_response_code,
+				  void *msg,
+				  uint32_t msg_len,
+				  void *response,
+				  uint32_t response_len);
+
+int bnxt_hwrm_tf_message_direct(struct bnxt *bp,
+				bool use_kong_mb,
+				uint16_t msg_type,
+				void *msg,
+				uint32_t msg_len,
+				void *resp_msg,
+				uint32_t resp_len);
+
 int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp,
 				   struct bnxt_vnic_info *vnic);
 int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 04/34] net/bnxt: add initial tf core session open
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (2 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 03/34] net/bnxt: add truflow message handlers Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 05/34] net/bnxt: add initial tf core session close support Venkat Duvvuru
                       ` (30 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru

From: Michael Wildt <michael.wildt@broadcom.com>

- Add infrastructure support
- Add tf_core open session support

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/Makefile                |   8 +
 drivers/net/bnxt/bnxt.h                  |   4 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       | 971 +++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c       | 145 +++++
 drivers/net/bnxt/tf_core/tf_core.h       | 347 +++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        |  79 +++
 drivers/net/bnxt/tf_core/tf_msg.h        |  44 ++
 drivers/net/bnxt/tf_core/tf_msg_common.h |  47 ++
 drivers/net/bnxt/tf_core/tf_project.h    |  24 +
 drivers/net/bnxt/tf_core/tf_resources.h  |  46 ++
 drivers/net/bnxt/tf_core/tf_rm.h         |  33 ++
 drivers/net/bnxt/tf_core/tf_session.h    |  85 +++
 drivers/net/bnxt/tf_core/tfp.c           | 163 ++++++
 drivers/net/bnxt/tf_core/tfp.h           | 188 ++++++
 14 files changed, 2184 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_core/hwrm_tf.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_project.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_resources.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.h
 create mode 100644 drivers/net/bnxt/tf_core/tfp.c
 create mode 100644 drivers/net/bnxt/tf_core/tfp.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index b77532b..8a68059 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -43,6 +43,14 @@ ifeq ($(CONFIG_RTE_ARCH_X86), y)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD), y)
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core
+endif
+
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
+
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index b795ed6..a8e57ca 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -21,6 +21,8 @@
 #include "bnxt_cpr.h"
 #include "bnxt_util.h"
 
+#include "tf_core.h"
+
 /* Vendor ID */
 #define PCI_VENDOR_ID_BROADCOM		0x14E4
 
@@ -679,6 +681,8 @@ struct bnxt {
 /* TCAM and EM should be 16-bit only. Other modes not supported. */
 #define BNXT_FLOW_ID_MASK	0x0000ffff
 	struct bnxt_mark_info	*mark_table;
+
+	struct tf               tfp;
 };
 
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
new file mode 100644
index 0000000..a8a5547
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -0,0 +1,971 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+#ifndef _HWRM_TF_H_
+#define _HWRM_TF_H_
+
+#include "tf_core.h"
+
+typedef enum tf_type {
+	TF_TYPE_TRUFLOW,
+	TF_TYPE_LAST = TF_TYPE_TRUFLOW,
+} tf_type_t;
+
+typedef enum tf_subtype {
+	HWRM_TFT_SESSION_ATTACH = 712,
+	HWRM_TFT_SESSION_HW_RESC_QCAPS = 721,
+	HWRM_TFT_SESSION_HW_RESC_ALLOC = 722,
+	HWRM_TFT_SESSION_HW_RESC_FREE = 723,
+	HWRM_TFT_SESSION_HW_RESC_FLUSH = 724,
+	HWRM_TFT_SESSION_SRAM_RESC_QCAPS = 725,
+	HWRM_TFT_SESSION_SRAM_RESC_ALLOC = 726,
+	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
+	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
+	HWRM_TFT_TBL_SCOPE_CFG = 731,
+	HWRM_TFT_EM_RULE_INSERT = 739,
+	HWRM_TFT_EM_RULE_DELETE = 740,
+	HWRM_TFT_REG_GET = 821,
+	HWRM_TFT_REG_SET = 822,
+	HWRM_TFT_TBL_TYPE_SET = 823,
+	HWRM_TFT_TBL_TYPE_GET = 824,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET,
+} tf_subtype_t;
+
+/* Request and Response compile time checking */
+/* u32_t	tlv_req_value[26]; */
+#define TF_MAX_REQ_SIZE 104
+/* u32_t	tlv_resp_value[170]; */
+#define TF_MAX_RESP_SIZE 680
+#define BUILD_BUG_ON(condition) typedef char p__LINE__[(condition) ? 1 : -1]
+
+/* Use this to allocate/free any kind of
+ * indexes over HWRM and fill the parms pointer
+ */
+#define TF_BULK_RECV	 128
+#define TF_BULK_SEND	  16
+
+/* EM Key value */
+#define TF_DEV_DATA_TYPE_TF_EM_RULE_INSERT_KEY_DATA 0x2e30UL
+/* EM Key value */
+#define TF_DEV_DATA_TYPE_TF_EM_RULE_DELETE_KEY_DATA 0x2e40UL
+/* L2 Context DMA Address Type */
+#define TF_DEV_DATA_TYPE_TF_L2_CTX_DMA_ADDR		0x2fe0UL
+/* L2 Context Entry */
+#define TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY		0x2fe1UL
+/* Prof tcam DMA Address Type */
+#define TF_DEV_DATA_TYPE_TF_PROF_TCAM_DMA_ADDR		0x3030UL
+/* Prof tcam Entry */
+#define TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY		0x3031UL
+/* WC DMA Address Type */
+#define TF_DEV_DATA_TYPE_TF_WC_DMA_ADDR			0x30d0UL
+/* WC Entry */
+#define TF_DEV_DATA_TYPE_TF_WC_ENTRY			0x30d1UL
+/* Action Data */
+#define TF_DEV_DATA_TYPE_TF_ACTION_DATA			0x3170UL
+#define TF_DEV_DATA_TYPE_LAST   TF_DEV_DATA_TYPE_TF_ACTION_DATA
+
+#define TF_BITS2BYTES(x) (((x) + 7) >> 3)
+#define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
+
+struct tf_session_attach_input;
+struct tf_session_hw_resc_qcaps_input;
+struct tf_session_hw_resc_qcaps_output;
+struct tf_session_hw_resc_alloc_input;
+struct tf_session_hw_resc_alloc_output;
+struct tf_session_hw_resc_free_input;
+struct tf_session_hw_resc_flush_input;
+struct tf_session_sram_resc_qcaps_input;
+struct tf_session_sram_resc_qcaps_output;
+struct tf_session_sram_resc_alloc_input;
+struct tf_session_sram_resc_alloc_output;
+struct tf_session_sram_resc_free_input;
+struct tf_session_sram_resc_flush_input;
+struct tf_tbl_type_set_input;
+struct tf_tbl_type_get_input;
+struct tf_tbl_type_get_output;
+struct tf_em_internal_insert_input;
+struct tf_em_internal_insert_output;
+struct tf_em_internal_delete_input;
+/* Input params for session attach */
+typedef struct tf_session_attach_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* Session Name */
+	char				 session_name[TF_SESSION_NAME_MAX];
+} tf_session_attach_input_t, *ptf_session_attach_input_t;
+BUILD_BUG_ON(sizeof(tf_session_attach_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource HW qcaps */
+typedef struct tf_session_hw_resc_qcaps_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query applies to RX */
+#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the query applies to TX */
+#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
+} tf_session_hw_resc_qcaps_input_t, *ptf_session_hw_resc_qcaps_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_qcaps_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource HW qcaps */
+typedef struct tf_session_hw_resc_qcaps_output {
+	/* Control Flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates Static partitioning */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
+	/* When set to 1, indicates Strategy 1 */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
+	/* When set to 2, indicates Strategy 2 */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
+	/* When set to 3, indicates Strategy 3 */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
+	/* Unused */
+	uint8_t			  unused[4];
+	/* Minimum guaranteed number of L2 Ctx */
+	uint16_t			 l2_ctx_tcam_entries_min;
+	/* Maximum non-guaranteed number of L2 Ctx */
+	uint16_t			 l2_ctx_tcam_entries_max;
+	/* Minimum guaranteed number of profile functions */
+	uint16_t			 prof_func_min;
+	/* Maximum non-guaranteed number of profile functions */
+	uint16_t			 prof_func_max;
+	/* Minimum guaranteed number of profile TCAM entries */
+	uint16_t			 prof_tcam_entries_min;
+	/* Maximum non-guaranteed number of profile TCAM entries */
+	uint16_t			 prof_tcam_entries_max;
+	/* Minimum guaranteed number of EM profile ID */
+	uint16_t			 em_prof_id_min;
+	/* Maximum non-guaranteed number of EM profile ID */
+	uint16_t			 em_prof_id_max;
+	/* Minimum guaranteed number of EM records entries */
+	uint16_t			 em_record_entries_min;
+	/* Maximum non-guaranteed number of EM record entries */
+	uint16_t			 em_record_entries_max;
+	/* Minimum guaranteed number of WC TCAM profile ID */
+	uint16_t			 wc_tcam_prof_id_min;
+	/* Maximum non-guaranteed number of WC TCAM profile ID */
+	uint16_t			 wc_tcam_prof_id_max;
+	/* Minimum guaranteed number of WC TCAM entries */
+	uint16_t			 wc_tcam_entries_min;
+	/* Maximum non-guaranteed number of WC TCAM entries */
+	uint16_t			 wc_tcam_entries_max;
+	/* Minimum guaranteed number of meter profiles */
+	uint16_t			 meter_profiles_min;
+	/* Maximum non-guaranteed number of meter profiles */
+	uint16_t			 meter_profiles_max;
+	/* Minimum guaranteed number of meter instances */
+	uint16_t			 meter_inst_min;
+	/* Maximum non-guaranteed number of meter instances */
+	uint16_t			 meter_inst_max;
+	/* Minimum guaranteed number of mirrors */
+	uint16_t			 mirrors_min;
+	/* Maximum non-guaranteed number of mirrors */
+	uint16_t			 mirrors_max;
+	/* Minimum guaranteed number of UPAR */
+	uint16_t			 upar_min;
+	/* Maximum non-guaranteed number of UPAR */
+	uint16_t			 upar_max;
+	/* Minimum guaranteed number of SP TCAM entries */
+	uint16_t			 sp_tcam_entries_min;
+	/* Maximum non-guaranteed number of SP TCAM entries */
+	uint16_t			 sp_tcam_entries_max;
+	/* Minimum guaranteed number of L2 Functions */
+	uint16_t			 l2_func_min;
+	/* Maximum non-guaranteed number of L2 Functions */
+	uint16_t			 l2_func_max;
+	/* Minimum guaranteed number of flexible key templates */
+	uint16_t			 flex_key_templ_min;
+	/* Maximum non-guaranteed number of flexible key templates */
+	uint16_t			 flex_key_templ_max;
+	/* Minimum guaranteed number of table scopes */
+	uint16_t			 tbl_scope_min;
+	/* Maximum non-guaranteed number of table scopes */
+	uint16_t			 tbl_scope_max;
+	/* Minimum guaranteed number of epoch0 entries */
+	uint16_t			 epoch0_entries_min;
+	/* Maximum non-guaranteed number of epoch0 entries */
+	uint16_t			 epoch0_entries_max;
+	/* Minimum guaranteed number of epoch1 entries */
+	uint16_t			 epoch1_entries_min;
+	/* Maximum non-guaranteed number of epoch1 entries */
+	uint16_t			 epoch1_entries_max;
+	/* Minimum guaranteed number of metadata */
+	uint16_t			 metadata_min;
+	/* Maximum non-guaranteed number of metadata */
+	uint16_t			 metadata_max;
+	/* Minimum guaranteed number of CT states */
+	uint16_t			 ct_state_min;
+	/* Maximum non-guaranteed number of CT states */
+	uint16_t			 ct_state_max;
+	/* Minimum guaranteed number of range profiles */
+	uint16_t			 range_prof_min;
+	/* Maximum non-guaranteed number of range profiles */
+	uint16_t			 range_prof_max;
+	/* Minimum guaranteed number of range entries */
+	uint16_t			 range_entries_min;
+	/* Maximum non-guaranteed number of range entries */
+	uint16_t			 range_entries_max;
+	/* Minimum guaranteed number of LAG table entries */
+	uint16_t			 lag_tbl_entries_min;
+	/* Maximum non-guaranteed number of LAG table entries */
+	uint16_t			 lag_tbl_entries_max;
+} tf_session_hw_resc_qcaps_output_t, *ptf_session_hw_resc_qcaps_output_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_qcaps_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource HW alloc */
+typedef struct tf_session_hw_resc_alloc_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the allocation applies to RX */
+#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the allocation applies to TX */
+#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Number of L2 CTX TCAM entries to be allocated */
+	uint16_t			 num_l2_ctx_tcam_entries;
+	/* Number of profile functions to be allocated */
+	uint16_t			 num_prof_func_entries;
+	/* Number of profile TCAM entries to be allocated */
+	uint16_t			 num_prof_tcam_entries;
+	/* Number of EM profile ids to be allocated */
+	uint16_t			 num_em_prof_id;
+	/* Number of EM records entries to be allocated */
+	uint16_t			 num_em_record_entries;
+	/* Number of WC profiles ids to be allocated */
+	uint16_t			 num_wc_tcam_prof_id;
+	/* Number of WC TCAM entries to be allocated */
+	uint16_t			 num_wc_tcam_entries;
+	/* Number of meter profiles to be allocated */
+	uint16_t			 num_meter_profiles;
+	/* Number of meter instances to be allocated */
+	uint16_t			 num_meter_inst;
+	/* Number of mirrors to be allocated */
+	uint16_t			 num_mirrors;
+	/* Number of UPAR to be allocated */
+	uint16_t			 num_upar;
+	/* Number of SP TCAM entries to be allocated */
+	uint16_t			 num_sp_tcam_entries;
+	/* Number of L2 functions to be allocated */
+	uint16_t			 num_l2_func;
+	/* Number of flexible key templates to be allocated */
+	uint16_t			 num_flex_key_templ;
+	/* Number of table scopes to be allocated */
+	uint16_t			 num_tbl_scope;
+	/* Number of epoch0 entries to be allocated */
+	uint16_t			 num_epoch0_entries;
+	/* Number of epoch1 entries to be allocated */
+	uint16_t			 num_epoch1_entries;
+	/* Number of metadata to be allocated */
+	uint16_t			 num_metadata;
+	/* Number of CT states to be allocated */
+	uint16_t			 num_ct_state;
+	/* Number of range profiles to be allocated */
+	uint16_t			 num_range_prof;
+	/* Number of range Entries to be allocated */
+	uint16_t			 num_range_entries;
+	/* Number of LAG table entries to be allocated */
+	uint16_t			 num_lag_tbl_entries;
+} tf_session_hw_resc_alloc_input_t, *ptf_session_hw_resc_alloc_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_alloc_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource HW alloc */
+typedef struct tf_session_hw_resc_alloc_output {
+	/* Starting index of L2 CTX TCAM entries allocated to the session */
+	uint16_t			 l2_ctx_tcam_entries_start;
+	/* Number of L2 CTX TCAM entries allocated */
+	uint16_t			 l2_ctx_tcam_entries_stride;
+	/* Starting index of profile functions allocated to the session */
+	uint16_t			 prof_func_start;
+	/* Number of profile functions allocated */
+	uint16_t			 prof_func_stride;
+	/* Starting index of profile TCAM entries allocated to the session */
+	uint16_t			 prof_tcam_entries_start;
+	/* Number of profile TCAM entries allocated */
+	uint16_t			 prof_tcam_entries_stride;
+	/* Starting index of EM profile ids allocated to the session */
+	uint16_t			 em_prof_id_start;
+	/* Number of EM profile ids allocated */
+	uint16_t			 em_prof_id_stride;
+	/* Starting index of EM record entries allocated to the session */
+	uint16_t			 em_record_entries_start;
+	/* Number of EM record entries allocated */
+	uint16_t			 em_record_entries_stride;
+	/* Starting index of WC TCAM profiles ids allocated to the session */
+	uint16_t			 wc_tcam_prof_id_start;
+	/* Number of WC TCAM profile ids allocated */
+	uint16_t			 wc_tcam_prof_id_stride;
+	/* Starting index of WC TCAM entries allocated to the session */
+	uint16_t			 wc_tcam_entries_start;
+	/* Number of WC TCAM entries allocated */
+	uint16_t			 wc_tcam_entries_stride;
+	/* Starting index of meter profiles allocated to the session */
+	uint16_t			 meter_profiles_start;
+	/* Number of meter profiles allocated */
+	uint16_t			 meter_profiles_stride;
+	/* Starting index of meter instances allocated to the session */
+	uint16_t			 meter_inst_start;
+	/* Number of meter instances allocated */
+	uint16_t			 meter_inst_stride;
+	/* Starting index of mirrors allocated to the session */
+	uint16_t			 mirrors_start;
+	/* Number of mirrors allocated */
+	uint16_t			 mirrors_stride;
+	/* Starting index of UPAR allocated to the session */
+	uint16_t			 upar_start;
+	/* Number of UPAR allocated */
+	uint16_t			 upar_stride;
+	/* Starting index of SP TCAM entries allocated to the session */
+	uint16_t			 sp_tcam_entries_start;
+	/* Number of SP TCAM entries allocated */
+	uint16_t			 sp_tcam_entries_stride;
+	/* Starting index of L2 functions allocated to the session */
+	uint16_t			 l2_func_start;
+	/* Number of L2 functions allocated */
+	uint16_t			 l2_func_stride;
+	/* Starting index of flexible key templates allocated to the session */
+	uint16_t			 flex_key_templ_start;
+	/* Number of flexible key templates allocated */
+	uint16_t			 flex_key_templ_stride;
+	/* Starting index of table scopes allocated to the session */
+	uint16_t			 tbl_scope_start;
+	/* Number of table scopes allocated */
+	uint16_t			 tbl_scope_stride;
+	/* Starting index of epoch0 entries allocated to the session */
+	uint16_t			 epoch0_entries_start;
+	/* Number of epoch0 entries allocated */
+	uint16_t			 epoch0_entries_stride;
+	/* Starting index of epoch1 entries allocated to the session */
+	uint16_t			 epoch1_entries_start;
+	/* Number of epoch1 entries allocated */
+	uint16_t			 epoch1_entries_stride;
+	/* Starting index of metadata allocated to the session */
+	uint16_t			 metadata_start;
+	/* Number of metadata allocated */
+	uint16_t			 metadata_stride;
+	/* Starting index of CT states allocated to the session */
+	uint16_t			 ct_state_start;
+	/* Number of CT states allocated */
+	uint16_t			 ct_state_stride;
+	/* Starting index of range profiles allocated to the session */
+	uint16_t			 range_prof_start;
+	/* Number of range profiles allocated */
+	uint16_t			 range_prof_stride;
+	/* Starting index of range entries allocated to the session */
+	uint16_t			 range_entries_start;
+	/* Number of range entries allocated */
+	uint16_t			 range_entries_stride;
+	/* Starting index of LAG table entries allocated to the session */
+	uint16_t			 lag_tbl_entries_start;
+	/* Number of LAG table entries allocated */
+	uint16_t			 lag_tbl_entries_stride;
+} tf_session_hw_resc_alloc_output_t, *ptf_session_hw_resc_alloc_output_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_alloc_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource HW free */
+typedef struct tf_session_hw_resc_free_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the free applies to RX */
+#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the free applies to TX */
+#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Starting index of L2 CTX TCAM entries allocated to the session */
+	uint16_t			 l2_ctx_tcam_entries_start;
+	/* Number of L2 CTX TCAM entries allocated */
+	uint16_t			 l2_ctx_tcam_entries_stride;
+	/* Starting index of profile functions allocated to the session */
+	uint16_t			 prof_func_start;
+	/* Number of profile functions allocated */
+	uint16_t			 prof_func_stride;
+	/* Starting index of profile TCAM entries allocated to the session */
+	uint16_t			 prof_tcam_entries_start;
+	/* Number of profile TCAM entries allocated */
+	uint16_t			 prof_tcam_entries_stride;
+	/* Starting index of EM profile ids allocated to the session */
+	uint16_t			 em_prof_id_start;
+	/* Number of EM profile ids allocated */
+	uint16_t			 em_prof_id_stride;
+	/* Starting index of EM record entries allocated to the session */
+	uint16_t			 em_record_entries_start;
+	/* Number of EM record entries allocated */
+	uint16_t			 em_record_entries_stride;
+	/* Starting index of WC TCAM profiles ids allocated to the session */
+	uint16_t			 wc_tcam_prof_id_start;
+	/* Number of WC TCAM profile ids allocated */
+	uint16_t			 wc_tcam_prof_id_stride;
+	/* Starting index of WC TCAM entries allocated to the session */
+	uint16_t			 wc_tcam_entries_start;
+	/* Number of WC TCAM entries allocated */
+	uint16_t			 wc_tcam_entries_stride;
+	/* Starting index of meter profiles allocated to the session */
+	uint16_t			 meter_profiles_start;
+	/* Number of meter profiles allocated */
+	uint16_t			 meter_profiles_stride;
+	/* Starting index of meter instances allocated to the session */
+	uint16_t			 meter_inst_start;
+	/* Number of meter instances allocated */
+	uint16_t			 meter_inst_stride;
+	/* Starting index of mirrors allocated to the session */
+	uint16_t			 mirrors_start;
+	/* Number of mirrors allocated */
+	uint16_t			 mirrors_stride;
+	/* Starting index of UPAR allocated to the session */
+	uint16_t			 upar_start;
+	/* Number of UPAR allocated */
+	uint16_t			 upar_stride;
+	/* Starting index of SP TCAM entries allocated to the session */
+	uint16_t			 sp_tcam_entries_start;
+	/* Number of SP TCAM entries allocated */
+	uint16_t			 sp_tcam_entries_stride;
+	/* Starting index of L2 functions allocated to the session */
+	uint16_t			 l2_func_start;
+	/* Number of L2 functions allocated */
+	uint16_t			 l2_func_stride;
+	/* Starting index of flexible key templates allocated to the session */
+	uint16_t			 flex_key_templ_start;
+	/* Number of flexible key templates allocated */
+	uint16_t			 flex_key_templ_stride;
+	/* Starting index of table scopes allocated to the session */
+	uint16_t			 tbl_scope_start;
+	/* Number of table scopes allocated */
+	uint16_t			 tbl_scope_stride;
+	/* Starting index of epoch0 entries allocated to the session */
+	uint16_t			 epoch0_entries_start;
+	/* Number of epoch0 entries allocated */
+	uint16_t			 epoch0_entries_stride;
+	/* Starting index of epoch1 entries allocated to the session */
+	uint16_t			 epoch1_entries_start;
+	/* Number of epoch1 entries allocated */
+	uint16_t			 epoch1_entries_stride;
+	/* Starting index of metadata allocated to the session */
+	uint16_t			 metadata_start;
+	/* Number of metadata allocated */
+	uint16_t			 metadata_stride;
+	/* Starting index of CT states allocated to the session */
+	uint16_t			 ct_state_start;
+	/* Number of CT states allocated */
+	uint16_t			 ct_state_stride;
+	/* Starting index of range profiles allocated to the session */
+	uint16_t			 range_prof_start;
+	/* Number of range profiles allocated */
+	uint16_t			 range_prof_stride;
+	/* Starting index of range entries allocated to the session */
+	uint16_t			 range_entries_start;
+	/* Number of range entries allocated */
+	uint16_t			 range_entries_stride;
+	/* Starting index of LAG table entries allocated to the session */
+	uint16_t			 lag_tbl_entries_start;
+	/* Number of LAG table entries allocated */
+	uint16_t			 lag_tbl_entries_stride;
+} tf_session_hw_resc_free_input_t, *ptf_session_hw_resc_free_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_free_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource HW flush */
+typedef struct tf_session_hw_resc_flush_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the flush applies to RX */
+#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the flush applies to TX */
+#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Starting index of L2 CTX TCAM entries allocated to the session */
+	uint16_t			 l2_ctx_tcam_entries_start;
+	/* Number of L2 CTX TCAM entries allocated */
+	uint16_t			 l2_ctx_tcam_entries_stride;
+	/* Starting index of profile functions allocated to the session */
+	uint16_t			 prof_func_start;
+	/* Number of profile functions allocated */
+	uint16_t			 prof_func_stride;
+	/* Starting index of profile TCAM entries allocated to the session */
+	uint16_t			 prof_tcam_entries_start;
+	/* Number of profile TCAM entries allocated */
+	uint16_t			 prof_tcam_entries_stride;
+	/* Starting index of EM profile ids allocated to the session */
+	uint16_t			 em_prof_id_start;
+	/* Number of EM profile ids allocated */
+	uint16_t			 em_prof_id_stride;
+	/* Starting index of EM record entries allocated to the session */
+	uint16_t			 em_record_entries_start;
+	/* Number of EM record entries allocated */
+	uint16_t			 em_record_entries_stride;
+	/* Starting index of WC TCAM profiles ids allocated to the session */
+	uint16_t			 wc_tcam_prof_id_start;
+	/* Number of WC TCAM profile ids allocated */
+	uint16_t			 wc_tcam_prof_id_stride;
+	/* Starting index of WC TCAM entries allocated to the session */
+	uint16_t			 wc_tcam_entries_start;
+	/* Number of WC TCAM entries allocated */
+	uint16_t			 wc_tcam_entries_stride;
+	/* Starting index of meter profiles allocated to the session */
+	uint16_t			 meter_profiles_start;
+	/* Number of meter profiles allocated */
+	uint16_t			 meter_profiles_stride;
+	/* Starting index of meter instances allocated to the session */
+	uint16_t			 meter_inst_start;
+	/* Number of meter instances allocated */
+	uint16_t			 meter_inst_stride;
+	/* Starting index of mirrors allocated to the session */
+	uint16_t			 mirrors_start;
+	/* Number of mirrors allocated */
+	uint16_t			 mirrors_stride;
+	/* Starting index of UPAR allocated to the session */
+	uint16_t			 upar_start;
+	/* Number of UPAR allocated */
+	uint16_t			 upar_stride;
+	/* Starting index of SP TCAM entries allocated to the session */
+	uint16_t			 sp_tcam_entries_start;
+	/* Number of SP TCAM entries allocated */
+	uint16_t			 sp_tcam_entries_stride;
+	/* Starting index of L2 functions allocated to the session */
+	uint16_t			 l2_func_start;
+	/* Number of L2 functions allocated */
+	uint16_t			 l2_func_stride;
+	/* Starting index of flexible key templates allocated to the session */
+	uint16_t			 flex_key_templ_start;
+	/* Number of flexible key templates allocated */
+	uint16_t			 flex_key_templ_stride;
+	/* Starting index of table scopes allocated to the session */
+	uint16_t			 tbl_scope_start;
+	/* Number of table scopes allocated */
+	uint16_t			 tbl_scope_stride;
+	/* Starting index of epoch0 entries allocated to the session */
+	uint16_t			 epoch0_entries_start;
+	/* Number of epoch0 entries allocated */
+	uint16_t			 epoch0_entries_stride;
+	/* Starting index of epoch1 entries allocated to the session */
+	uint16_t			 epoch1_entries_start;
+	/* Number of epoch1 entries allocated */
+	uint16_t			 epoch1_entries_stride;
+	/* Starting index of metadata allocated to the session */
+	uint16_t			 metadata_start;
+	/* Number of metadata allocated */
+	uint16_t			 metadata_stride;
+	/* Starting index of CT states allocated to the session */
+	uint16_t			 ct_state_start;
+	/* Number of CT states allocated */
+	uint16_t			 ct_state_stride;
+	/* Starting index of range profiles allocated to the session */
+	uint16_t			 range_prof_start;
+	/* Number of range profiles allocated */
+	uint16_t			 range_prof_stride;
+	/* Starting index of range entries allocated to the session */
+	uint16_t			 range_entries_start;
+	/* Number of range entries allocated */
+	uint16_t			 range_entries_stride;
+	/* Starting index of LAG table entries allocated to the session */
+	uint16_t			 lag_tbl_entries_start;
+	/* Number of LAG table entries allocated */
+	uint16_t			 lag_tbl_entries_stride;
+} tf_session_hw_resc_flush_input_t, *ptf_session_hw_resc_flush_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_flush_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource SRAM qcaps */
+typedef struct tf_session_sram_resc_qcaps_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query applies to RX */
+#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the query applies to TX */
+#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
+} tf_session_sram_resc_qcaps_input_t, *ptf_session_sram_resc_qcaps_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_qcaps_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource SRAM qcaps */
+typedef struct tf_session_sram_resc_qcaps_output {
+	/* Flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates Static partitioning */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
+	/* When set to 1, indicates Strategy 1 */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
+	/* When set to 2, indicates Strategy 2 */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
+	/* When set to 3, indicates Strategy 3 */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
+	/* Minimum guaranteed number of Full Action */
+	uint16_t			 full_action_min;
+	/* Maximum non-guaranteed number of Full Action */
+	uint16_t			 full_action_max;
+	/* Minimum guaranteed number of MCG */
+	uint16_t			 mcg_min;
+	/* Maximum non-guaranteed number of MCG */
+	uint16_t			 mcg_max;
+	/* Minimum guaranteed number of Encap 8B */
+	uint16_t			 encap_8b_min;
+	/* Maximum non-guaranteed number of Encap 8B */
+	uint16_t			 encap_8b_max;
+	/* Minimum guaranteed number of Encap 16B */
+	uint16_t			 encap_16b_min;
+	/* Maximum non-guaranteed number of Encap 16B */
+	uint16_t			 encap_16b_max;
+	/* Minimum guaranteed number of Encap 64B */
+	uint16_t			 encap_64b_min;
+	/* Maximum non-guaranteed number of Encap 64B */
+	uint16_t			 encap_64b_max;
+	/* Minimum guaranteed number of SP SMAC */
+	uint16_t			 sp_smac_min;
+	/* Maximum non-guaranteed number of SP SMAC */
+	uint16_t			 sp_smac_max;
+	/* Minimum guaranteed number of SP SMAC IPv4 */
+	uint16_t			 sp_smac_ipv4_min;
+	/* Maximum non-guaranteed number of SP SMAC IPv4 */
+	uint16_t			 sp_smac_ipv4_max;
+	/* Minimum guaranteed number of SP SMAC IPv6 */
+	uint16_t			 sp_smac_ipv6_min;
+	/* Maximum non-guaranteed number of SP SMAC IPv6 */
+	uint16_t			 sp_smac_ipv6_max;
+	/* Minimum guaranteed number of Counter 64B */
+	uint16_t			 counter_64b_min;
+	/* Maximum non-guaranteed number of Counter 64B */
+	uint16_t			 counter_64b_max;
+	/* Minimum guaranteed number of NAT SPORT */
+	uint16_t			 nat_sport_min;
+	/* Maximum non-guaranteed number of NAT SPORT */
+	uint16_t			 nat_sport_max;
+	/* Minimum guaranteed number of NAT DPORT */
+	uint16_t			 nat_dport_min;
+	/* Maximum non-guaranteed number of NAT DPORT */
+	uint16_t			 nat_dport_max;
+	/* Minimum guaranteed number of NAT S_IPV4 */
+	uint16_t			 nat_s_ipv4_min;
+	/* Maximum non-guaranteed number of NAT S_IPV4 */
+	uint16_t			 nat_s_ipv4_max;
+	/* Minimum guaranteed number of NAT D_IPV4 */
+	uint16_t			 nat_d_ipv4_min;
+	/* Maximum non-guaranteed number of NAT D_IPV4 */
+	uint16_t			 nat_d_ipv4_max;
+} tf_session_sram_resc_qcaps_output_t, *ptf_session_sram_resc_qcaps_output_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_qcaps_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource SRAM alloc */
+typedef struct tf_session_sram_resc_alloc_input {
+	/* FW Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the allocation applies to RX */
+#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the allocation applies to TX */
+#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Number of full action SRAM entries to be allocated */
+	uint16_t			 num_full_action;
+	/* Number of multicast groups to be allocated */
+	uint16_t			 num_mcg;
+	/* Number of Encap 8B entries to be allocated */
+	uint16_t			 num_encap_8b;
+	/* Number of Encap 16B entries to be allocated */
+	uint16_t			 num_encap_16b;
+	/* Number of Encap 64B entries to be allocated */
+	uint16_t			 num_encap_64b;
+	/* Number of SP SMAC entries to be allocated */
+	uint16_t			 num_sp_smac;
+	/* Number of SP SMAC IPv4 entries to be allocated */
+	uint16_t			 num_sp_smac_ipv4;
+	/* Number of SP SMAC IPv6 entries to be allocated */
+	uint16_t			 num_sp_smac_ipv6;
+	/* Number of Counter 64B entries to be allocated */
+	uint16_t			 num_counter_64b;
+	/* Number of NAT source ports to be allocated */
+	uint16_t			 num_nat_sport;
+	/* Number of NAT destination ports to be allocated */
+	uint16_t			 num_nat_dport;
+	/* Number of NAT source IPV4 addresses to be allocated */
+	uint16_t			 num_nat_s_ipv4;
+	/* Number of NAT destination IPV4 addresses to be allocated */
+	uint16_t			 num_nat_d_ipv4;
+} tf_session_sram_resc_alloc_input_t, *ptf_session_sram_resc_alloc_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_alloc_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource SRAM alloc */
+typedef struct tf_session_sram_resc_alloc_output {
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Starting index of full action SRAM entries allocated to the session */
+	uint16_t			 full_action_start;
+	/* Number of full action SRAM entries allocated */
+	uint16_t			 full_action_stride;
+	/* Starting index of multicast groups allocated to this session */
+	uint16_t			 mcg_start;
+	/* Number of multicast groups allocated */
+	uint16_t			 mcg_stride;
+	/* Starting index of encap 8B entries allocated to the session */
+	uint16_t			 encap_8b_start;
+	/* Number of encap 8B entries allocated */
+	uint16_t			 encap_8b_stride;
+	/* Starting index of encap 16B entries allocated to the session */
+	uint16_t			 encap_16b_start;
+	/* Number of encap 16B entries allocated */
+	uint16_t			 encap_16b_stride;
+	/* Starting index of encap 64B entries allocated to the session */
+	uint16_t			 encap_64b_start;
+	/* Number of encap 64B entries allocated */
+	uint16_t			 encap_64b_stride;
+	/* Starting index of SP SMAC entries allocated to the session */
+	uint16_t			 sp_smac_start;
+	/* Number of SP SMAC entries allocated */
+	uint16_t			 sp_smac_stride;
+	/* Starting index of SP SMAC IPv4 entries allocated to the session */
+	uint16_t			 sp_smac_ipv4_start;
+	/* Number of SP SMAC IPv4 entries allocated */
+	uint16_t			 sp_smac_ipv4_stride;
+	/* Starting index of SP SMAC IPv6 entries allocated to the session */
+	uint16_t			 sp_smac_ipv6_start;
+	/* Number of SP SMAC IPv6 entries allocated */
+	uint16_t			 sp_smac_ipv6_stride;
+	/* Starting index of Counter 64B entries allocated to the session */
+	uint16_t			 counter_64b_start;
+	/* Number of Counter 64B entries allocated */
+	uint16_t			 counter_64b_stride;
+	/* Starting index of NAT source ports allocated to the session */
+	uint16_t			 nat_sport_start;
+	/* Number of NAT source ports allocated */
+	uint16_t			 nat_sport_stride;
+	/* Starting index of NAT destination ports allocated to the session */
+	uint16_t			 nat_dport_start;
+	/* Number of NAT destination ports allocated */
+	uint16_t			 nat_dport_stride;
+	/* Starting index of NAT source IPV4 addresses allocated to the session */
+	uint16_t			 nat_s_ipv4_start;
+	/* Number of NAT source IPV4 addresses allocated */
+	uint16_t			 nat_s_ipv4_stride;
+	/*
+	 * Starting index of NAT destination IPV4 addresses allocated to the
+	 * session
+	 */
+	uint16_t			 nat_d_ipv4_start;
+	/* Number of NAT destination IPV4 addresses allocated */
+	uint16_t			 nat_d_ipv4_stride;
+} tf_session_sram_resc_alloc_output_t, *ptf_session_sram_resc_alloc_output_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_alloc_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource SRAM free */
+typedef struct tf_session_sram_resc_free_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the free applies to RX */
+#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the free applies to TX */
+#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Starting index of full action SRAM entries allocated to the session */
+	uint16_t			 full_action_start;
+	/* Number of full action SRAM entries allocated */
+	uint16_t			 full_action_stride;
+	/* Starting index of multicast groups allocated to this session */
+	uint16_t			 mcg_start;
+	/* Number of multicast groups allocated */
+	uint16_t			 mcg_stride;
+	/* Starting index of encap 8B entries allocated to the session */
+	uint16_t			 encap_8b_start;
+	/* Number of encap 8B entries allocated */
+	uint16_t			 encap_8b_stride;
+	/* Starting index of encap 16B entries allocated to the session */
+	uint16_t			 encap_16b_start;
+	/* Number of encap 16B entries allocated */
+	uint16_t			 encap_16b_stride;
+	/* Starting index of encap 64B entries allocated to the session */
+	uint16_t			 encap_64b_start;
+	/* Number of encap 64B entries allocated */
+	uint16_t			 encap_64b_stride;
+	/* Starting index of SP SMAC entries allocated to the session */
+	uint16_t			 sp_smac_start;
+	/* Number of SP SMAC entries allocated */
+	uint16_t			 sp_smac_stride;
+	/* Starting index of SP SMAC IPv4 entries allocated to the session */
+	uint16_t			 sp_smac_ipv4_start;
+	/* Number of SP SMAC IPv4 entries allocated */
+	uint16_t			 sp_smac_ipv4_stride;
+	/* Starting index of SP SMAC IPv6 entries allocated to the session */
+	uint16_t			 sp_smac_ipv6_start;
+	/* Number of SP SMAC IPv6 entries allocated */
+	uint16_t			 sp_smac_ipv6_stride;
+	/* Starting index of Counter 64B entries allocated to the session */
+	uint16_t			 counter_64b_start;
+	/* Number of Counter 64B entries allocated */
+	uint16_t			 counter_64b_stride;
+	/* Starting index of NAT source ports allocated to the session */
+	uint16_t			 nat_sport_start;
+	/* Number of NAT source ports allocated */
+	uint16_t			 nat_sport_stride;
+	/* Starting index of NAT destination ports allocated to the session */
+	uint16_t			 nat_dport_start;
+	/* Number of NAT destination ports allocated */
+	uint16_t			 nat_dport_stride;
+	/* Starting index of NAT source IPV4 addresses allocated to the session */
+	uint16_t			 nat_s_ipv4_start;
+	/* Number of NAT source IPV4 addresses allocated */
+	uint16_t			 nat_s_ipv4_stride;
+	/*
+	 * Starting index of NAT destination IPV4 addresses allocated to the
+	 * session
+	 */
+	uint16_t			 nat_d_ipv4_start;
+	/* Number of NAT destination IPV4 addresses allocated */
+	uint16_t			 nat_d_ipv4_stride;
+} tf_session_sram_resc_free_input_t, *ptf_session_sram_resc_free_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_free_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource SRAM flush */
+typedef struct tf_session_sram_resc_flush_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the flush applies to RX */
+#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the flush applies to TX */
+#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Starting index of full action SRAM entries allocated to the session */
+	uint16_t			 full_action_start;
+	/* Number of full action SRAM entries allocated */
+	uint16_t			 full_action_stride;
+	/* Starting index of multicast groups allocated to this session */
+	uint16_t			 mcg_start;
+	/* Number of multicast groups allocated */
+	uint16_t			 mcg_stride;
+	/* Starting index of encap 8B entries allocated to the session */
+	uint16_t			 encap_8b_start;
+	/* Number of encap 8B entries allocated */
+	uint16_t			 encap_8b_stride;
+	/* Starting index of encap 16B entries allocated to the session */
+	uint16_t			 encap_16b_start;
+	/* Number of encap 16B entries allocated */
+	uint16_t			 encap_16b_stride;
+	/* Starting index of encap 64B entries allocated to the session */
+	uint16_t			 encap_64b_start;
+	/* Number of encap 64B entries allocated */
+	uint16_t			 encap_64b_stride;
+	/* Starting index of SP SMAC entries allocated to the session */
+	uint16_t			 sp_smac_start;
+	/* Number of SP SMAC entries allocated */
+	uint16_t			 sp_smac_stride;
+	/* Starting index of SP SMAC IPv4 entries allocated to the session */
+	uint16_t			 sp_smac_ipv4_start;
+	/* Number of SP SMAC IPv4 entries allocated */
+	uint16_t			 sp_smac_ipv4_stride;
+	/* Starting index of SP SMAC IPv6 entries allocated to the session */
+	uint16_t			 sp_smac_ipv6_start;
+	/* Number of SP SMAC IPv6 entries allocated */
+	uint16_t			 sp_smac_ipv6_stride;
+	/* Starting index of Counter 64B entries allocated to the session */
+	uint16_t			 counter_64b_start;
+	/* Number of Counter 64B entries allocated */
+	uint16_t			 counter_64b_stride;
+	/* Starting index of NAT source ports allocated to the session */
+	uint16_t			 nat_sport_start;
+	/* Number of NAT source ports allocated */
+	uint16_t			 nat_sport_stride;
+	/* Starting index of NAT destination ports allocated to the session */
+	uint16_t			 nat_dport_start;
+	/* Number of NAT destination ports allocated */
+	uint16_t			 nat_dport_stride;
+	/* Starting index of NAT source IPV4 addresses allocated to the session */
+	uint16_t			 nat_s_ipv4_start;
+	/* Number of NAT source IPV4 addresses allocated */
+	uint16_t			 nat_s_ipv4_stride;
+	/*
+	 * Starting index of NAT destination IPV4 addresses allocated to the
+	 * session
+	 */
+	uint16_t			 nat_d_ipv4_start;
+	/* Number of NAT destination IPV4 addresses allocated */
+	uint16_t			 nat_d_ipv4_stride;
+} tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_flush_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for table type get */
+typedef struct tf_tbl_type_get_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the get applies to RX */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
+	/* When set to 1, indicates the get applies to TX */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* Type of the object to get */
+	uint32_t			 type;
+	/* Index to get */
+	uint32_t			 index;
+} tf_tbl_type_get_input_t, *ptf_tbl_type_get_input_t;
+BUILD_BUG_ON(sizeof(tf_tbl_type_get_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for table type get */
+typedef struct tf_tbl_type_get_output {
+	/* Size of the data read in bytes */
+	uint16_t			 size;
+	/* Data read */
+	uint8_t			  data[TF_BULK_RECV];
+} tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
+BUILD_BUG_ON(sizeof(tf_tbl_type_get_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for EM internal rule insert */
+typedef struct tf_em_internal_insert_input {
+	/* Firmware Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the insert applies to RX */
+#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the insert applies to TX */
+#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* strength */
+	uint16_t			 strength;
+	/* index to action */
+	uint32_t			 action_ptr;
+	/* index of em record */
+	uint32_t			 em_record_idx;
+	/* EM Key value */
+	uint64_t			 em_key[8];
+	/* number of bits in em_key */
+	uint16_t			 em_key_bitlen;
+} tf_em_internal_insert_input_t, *ptf_em_internal_insert_input_t;
+BUILD_BUG_ON(sizeof(tf_em_internal_insert_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for EM internal rule insert */
+typedef struct tf_em_internal_insert_output {
+	/* EM record pointer index */
+	uint16_t			 rptr_index;
+	/* EM record offset 0~3 */
+	uint8_t			  rptr_entry;
+} tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
+BUILD_BUG_ON(sizeof(tf_em_internal_insert_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for EM INTERNAL rule delete */
+typedef struct tf_em_internal_delete_input {
+	/* Session Id */
+	uint32_t			 tf_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the delete applies to RX */
+#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the delete applies to TX */
+#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* EM internal flow handle */
+	uint64_t			 flow_handle;
+	/* EM Key value */
+	uint64_t			 em_key[8];
+	/* number of bits in em_key */
+	uint16_t			 em_key_bitlen;
+} tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
+BUILD_BUG_ON(sizeof(tf_em_internal_delete_input_t) <= TF_MAX_REQ_SIZE);
+
+#endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
new file mode 100644
index 0000000..6bafae5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdio.h>
+
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "bnxt.h"
+
+int
+tf_open_session(struct tf                    *tfp,
+		struct tf_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session *session;
+	struct tfp_calloc_parms alloc_parms;
+	unsigned int domain, bus, slot, device;
+	uint8_t fw_session_id;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	/* Filter out any non-supported device types on the Core
+	 * side. It is assumed that the device is supported by the
+	 * firmware if the firmware session open succeeds.
+	 */
+	if (parms->device_type != TF_DEVICE_TYPE_WH)
+		return -ENOTSUP;
+
+	/* Build the beginning of session_id */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* open FW session and get a new session_id */
+	rc = tf_msg_session_open(tfp,
+				 parms->ctrl_chan_name,
+				 &fw_session_id);
+	if (rc) {
+		/* Log error */
+		if (rc == -EEXIST)
+			PMD_DRV_LOG(ERR,
+				    "Session is already open, rc:%d\n",
+				    rc);
+		else
+			PMD_DRV_LOG(ERR,
+				    "Open message send failed, rc:%d\n",
+				    rc);
+
+		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
+		return rc;
+	}
+
+	/* Allocate session */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = sizeof(struct tf_session_info);
+	alloc_parms.alignment = 0;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%d\n",
+			    rc);
+		goto cleanup;
+	}
+
+	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
+
+	/* Allocate core data for the session */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = sizeof(struct tf_session);
+	alloc_parms.alignment = 0;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%d\n",
+			    rc);
+		goto cleanup;
+	}
+
+	tfp->session->core_data = alloc_parms.mem_va;
+
+	session = (struct tf_session *)tfp->session->core_data;
+	tfp_memcpy(session->ctrl_chan_name,
+		   parms->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	/* Initialize Session */
+	session->device_type = parms->device_type;
+
+	/* Construct the Session ID */
+	session->session_id.internal.domain = domain;
+	session->session_id.internal.bus = bus;
+	session->session_id.internal.device = device;
+	session->session_id.internal.fw_session_id = fw_session_id;
+
+	rc = tf_msg_session_qcfg(tfp);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%d\n",
+			    rc);
+		goto cleanup_close;
+	}
+
+	session->ref_count++;
+
+	/* Return session ID */
+	parms->session_id = session->session_id;
+
+	PMD_DRV_LOG(INFO,
+		    "Session created, session_id:%d\n",
+		    parms->session_id.id);
+
+	PMD_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return 0;
+
+ cleanup:
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
+	return rc;
+
+ cleanup_close:
+	return -EINVAL;
+}
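
For context, a minimal caller sketch (illustrative only, not part of the
patch; the PCI name, the bp variable and the error handling are
assumptions) of how a ULP could open a session, given the port name from
rte_eth_dev_get_name_by_port() and strlcpy from rte_string_fns.h:

	struct tf_open_session_parms oparms = { 0 };
	int rc;

	/* ctrl_chan_name must parse as domain:bus:slot.device */
	strlcpy(oparms.ctrl_chan_name, "0000:3b:00.0", TF_SESSION_NAME_MAX);
	oparms.device_type = TF_DEVICE_TYPE_WH;

	/* struct tf is embedded in struct bnxt as member 'tfp' */
	rc = tf_open_session(&bp->tfp, &oparms);
	if (rc == 0)
		printf("session_id 0x%x\n", oparms.session_id.id);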
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
new file mode 100644
index 0000000..69433ac
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -0,0 +1,347 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_CORE_H_
+#define _TF_CORE_H_
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdio.h>
+
+#include "tf_project.h"
+
+/**
+ * @file
+ *
+ * Truflow Core API Header File
+ */
+
+/********** BEGIN Truflow Core DEFINITIONS **********/
+
+/**
+ * direction
+ */
+enum tf_dir {
+	TF_DIR_RX,  /**< Receive */
+	TF_DIR_TX,  /**< Transmit */
+	TF_DIR_MAX
+};
+
+/********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
+
+/**
+ * @page general General
+ *
+ * @ref tf_open_session
+ *
+ * @ref tf_attach_session
+ *
+ * @ref tf_close_session
+ */
+
+
+/** Session Version defines
+ *
+ * The version controls the format of the tf_session and
+ * tf_session_info structure. This is to assure upgrade between
+ * versions can be supported.
+ */
+#define TF_SESSION_VER_MAJOR  1   /**< Major Version */
+#define TF_SESSION_VER_MINOR  0   /**< Minor Version */
+#define TF_SESSION_VER_UPDATE 0   /**< Update Version */
+
+/** Session Name
+ *
+ * Name of the TruFlow control channel interface.  Expects
+ * format to be RTE Name specific, i.e. rte_eth_dev_get_name_by_port()
+ */
+#define TF_SESSION_NAME_MAX       64
+
+#define TF_FW_SESSION_ID_INVALID  0xFF  /**< Invalid FW Session ID define */
+
+/** Session Identifier
+ *
+ * Unique session identifier which includes PCIe bus info to
+ * distinguish the PF and session info to identify the associated
+ * TruFlow session. Session ID is constructed from the passed in
+ * ctrl_chan_name in tf_open_session() together with an allocated
+ * fw_session_id. Done by TruFlow on tf_open_session().
+ */
+union tf_session_id {
+	uint32_t id;
+	struct {
+		uint8_t domain;
+		uint8_t bus;
+		uint8_t device;
+		uint8_t fw_session_id;
+	} internal;
+};
+
+/** Session Version
+ *
+ * The version controls the format of the tf_session and
+ * tf_session_info structure. This is to assure upgrade between
+ * versions can be supported.
+ *
+ * Please see the TF_VER_MAJOR/MINOR and UPDATE defines.
+ */
+struct tf_session_version {
+	uint8_t major;
+	uint8_t minor;
+	uint8_t update;
+};
+
+/** Session supported device types
+ *
+ */
+enum tf_device_type {
+	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
+	TF_DEVICE_TYPE_BRD2,   /**< TBD       */
+	TF_DEVICE_TYPE_BRD3,   /**< TBD       */
+	TF_DEVICE_TYPE_BRD4,   /**< TBD       */
+	TF_DEVICE_TYPE_MAX     /**< Maximum   */
+};
+
+/** TruFlow Session Information
+ *
+ * Structure defining a TruFlow Session, also known as a Management
+ * session. This structure is initialized at time of
+ * tf_open_session(). It is passed to all of the TruFlow APIs as way
+ * to prescribe and isolate resources between different TruFlow ULP
+ * Applications.
+ */
+struct tf_session_info {
+	/**
+	 * TrueFlow Version. Used to control the structure layout when
+	 * sharing sessions. No guarantee that a secondary process
+	 * would come from the same version of an executable.
+	 * TruFlow initializes this variable on tf_open_session().
+	 *
+	 * Owner:  TruFlow
+	 * Access: TruFlow
+	 */
+	struct tf_session_version ver;
+	/**
+	 * will be STAILQ_ENTRY(tf_session_info) next
+	 *
+	 * Owner:  ULP
+	 * Access: ULP
+	 */
+	void                 *next;
+	/**
+	 * Session ID is a unique identifier for the session. TruFlow
+	 * initializes this variable during tf_open_session()
+	 * processing.
+	 *
+	 * Owner:  TruFlow
+	 * Access: Truflow & ULP
+	 */
+	union tf_session_id   session_id;
+	/**
+	 * Protects access to core_data. Lock is initialized and owned
+	 * by ULP. TruFlow can access the core_data without checking
+	 * the lock.
+	 *
+	 * Owner:  ULP
+	 * Access: ULP
+	 */
+	uint8_t               spin_lock;
+	/**
+	 * The core_data holds the TruFlow tf_session data
+	 * structure. This memory is allocated and owned by TruFlow on
+	 * tf_open_session().
+	 *
+	 * TruFlow uses this memory for session management control
+	 * until the session is closed by ULP. Access control is done
+	 * by the spin_lock which ULP controls ahead of TruFlow API
+	 * calls.
+	 *
+	 * Please see tf_open_session_parms for specification details
+	 * on this variable.
+	 *
+	 * Owner:  TruFlow
+	 * Access: TruFlow
+	 */
+	void                 *core_data;
+	/**
+	 * The core_data_sz_bytes specifies the size of core_data in
+	 * bytes.
+	 *
+	 * The size is set by TruFlow on tf_open_session().
+	 *
+	 * Please see tf_open_session_parms for specification details
+	 * on this variable.
+	 *
+	 * Owner:  TruFlow
+	 * Access: TruFlow
+	 */
+	uint32_t              core_data_sz_bytes;
+};
+
+/** TruFlow handle
+ *
+ * Contains a pointer to the session info. Allocated by ULP and passed
+ * to TruFlow using tf_open_session(). TruFlow will populate the
+ * session info at that time. Additional 'opens' can be done using
+ * same session_info by using tf_attach_session().
+ *
+ * It is expected that ULP allocates this memory as shared memory.
+ *
+ * NOTE: This struct must be within the BNXT PMD struct bnxt
+ *       (bp). This allows use of container_of() to get access to the PMD.
+ */
+struct tf {
+	struct tf_session_info *session;
+};
+
+
+/**
+ * tf_open_session parameters definition.
+ */
+struct tf_open_session_parms {
+	/** [in] ctrl_chan_name
+	 *
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * The ctrl_chan_name can be looked up by using
+	 * rte_eth_dev_get_name_by_port() within the ULP.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+	/** [in] shadow_copy
+	 *
+	 * Boolean controlling the use and availability of shadow
+	 * copy. Shadow copy will allow the TruFlow to keep track of
+	 * resource content on the firmware side without having to
+	 * query firmware. Additional private session core_data will
+	 * be allocated if this boolean is set to 'true', default
+	 * 'false'.
+	 *
+	 * Size of memory depends on the NVM Resource settings for the
+	 * control channel.
+	 */
+	bool shadow_copy;
+	/** [in/out] session_id
+	 *
+	 * Session_id is unique per session.
+	 *
+	 * Session_id is composed of domain, bus, device and
+	 * fw_session_id. The construction is done by parsing the
+	 * ctrl_chan_name together with allocation of a fw_session_id.
+	 *
+	 * The session_id allows a session to be shared between devices.
+	 */
+	union tf_session_id session_id;
+	/** [in] device type
+	 *
+	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
+	 */
+	enum tf_device_type device_type;
+};
+
+/**
+ * Opens a new TruFlow management session.
+ *
+ * TruFlow will allocate session specific memory, shared memory, to
+ * hold its session data. This data is private to TruFlow.
+ *
+ * Multiple PFs can share the same session. An association, refcount,
+ * between session and PFs is maintained within TruFlow. Thus, a PF
+ * can attach to an existing session, see tf_attach_session().
+ *
+ * No other TruFlow APIs will succeed unless this API is first called and
+ * succeeds.
+ *
+ * tf_open_session() returns a session id that can be used on attach.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ * [in] parms
+ *   Pointer to open parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_open_session(struct tf *tfp,
+		    struct tf_open_session_parms *parms);
+
+struct tf_attach_session_parms {
+	/** [in] ctrl_chan_name
+	 *
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * The ctrl_chan_name can be looked up by using
+	 * rte_eth_dev_get_name_by_port() within the ULP.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/** [in] attach_chan_name
+	 *
+	 * String containing name of attach channel interface to be
+	 * used for this session.
+	 *
+	 * The attach_chan_name must be given to a 2nd process after
+	 * the primary process has been created. This is the
+	 * ctrl_chan_name of the primary process and is used to find
+	 * the shared memory for the session that the attach is going
+	 * to use.
+	 */
+	char attach_chan_name[TF_SESSION_NAME_MAX];
+
+	/** [in] session_id
+	 *
+	 * Session_id is unique per session. For Attach the session_id
+	 * should be the session_id that was returned on the first
+	 * open.
+	 *
+	 * Session_id is composed of domain, bus, device and
+	 * fw_session_id. The construction is done by parsing the
+	 * ctrl_chan_name together with allocation of a fw_session_id
+	 * during tf_open_session().
+	 *
+	 * A reference count will be incremented on attach. A session
+	 * is first fully closed when reference count is zero by
+	 * calling tf_close_session().
+	 */
+	union tf_session_id session_id;
+};
+
+/**
+ * Attaches to an existing session. Used when more than one PF wants
+ * to share a single session. In that case all TruFlow management
+ * traffic will be sent to the TruFlow firmware using the 'PF' that
+ * did the attach, not the session ctrl channel.
+ *
+ * Attach increments a reference count to manage the shared session data.
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] parms, pointer to attach parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_attach_session(struct tf *tfp,
+		      struct tf_attach_session_parms *parms);
+
+/**
+ * Closes an existing session. Cleans up all hardware and firmware
+ * state associated with the TruFlow application session once the last
+ * PF associated with the session closes and the reference count
+ * reaches zero.
+ *
+ * Returns success or failure code.
+ */
+int tf_close_session(struct tf *tfp);
+
+#endif /* _TF_CORE_H_ */
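
To illustrate the tf_session_id layout above, a small sketch (the field
values are made up; the byte placement shown assumes a little-endian
host, following the union declaration):

	union tf_session_id sid = { 0 };

	sid.internal.domain = 0x00;
	sid.internal.bus = 0x3b;
	sid.internal.device = 0x00;
	sid.internal.fw_session_id = 0x01;
	/* sid.id is now 0x01003b00: fw_session_id in the top byte,
	 * domain in the bottom byte.
	 */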
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
new file mode 100644
index 0000000..2b68681
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+#include <stdbool.h>
+#include <stdlib.h>
+
+#include "bnxt.h"
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tfp.h"
+
+#include "tf_msg_common.h"
+#include "tf_msg.h"
+#include "hsi_struct_def_dpdk.h"
+#include "hwrm_tf.h"
+
+/**
+ * Sends session open request to TF Firmware
+ */
+int
+tf_msg_session_open(struct tf *tfp,
+		    char *ctrl_chan_name,
+		    uint8_t *fw_session_id)
+{
+	int rc;
+	struct hwrm_tf_session_open_input req = { 0 };
+	struct hwrm_tf_session_open_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
+
+	parms.tf_type = HWRM_TF_SESSION_OPEN;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*fw_session_id = resp.fw_session_id;
+
+	return rc;
+}
+
+/**
+ * Sends session query config request to TF Firmware
+ */
+int
+tf_msg_session_qcfg(struct tf *tfp)
+{
+	int rc;
+	struct hwrm_tf_session_qcfg_input  req = { 0 };
+	struct hwrm_tf_session_qcfg_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	parms.tf_type = HWRM_TF_SESSION_QCFG;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
new file mode 100644
index 0000000..20ebf2e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_MSG_H_
+#define _TF_MSG_H_
+
+#include "tf_rm.h"
+
+struct tf;
+
+/**
+ * Sends session open request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
+ * [in/out] fw_session_id
+ *   Pointer to the fw_session_id that is allocated on firmware side
+ *
+ * Returns:
+ *   0 on success, negative error code on failure.
+ */
+int tf_msg_session_open(struct tf *tfp,
+			char *ctrl_chan_name,
+			uint8_t *fw_session_id);
+
+/**
+ * Sends session query config request to TF Firmware
+ */
+int tf_msg_session_qcfg(struct tf *tfp);
+
+/**
+ * Sends session HW resource query capability request to TF Firmware
+ */
+int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
+				 enum tf_dir dir,
+				 struct tf_rm_hw_query *hw_query);
+
+#endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg_common.h b/drivers/net/bnxt/tf_core/tf_msg_common.h
new file mode 100644
index 0000000..7a4e825
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_msg_common.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_MSG_COMMON_H_
+#define _TF_MSG_COMMON_H_
+
+/* Communication Mailboxes */
+#define TF_CHIMP_MB 0
+#define TF_KONG_MB  1
+
+/* Helper to fill in the parms structure */
+#define MSG_PREP(parms, mb, type, subtype, req, resp) do {	\
+		parms.mailbox = mb;				\
+		parms.tf_type = type;				\
+		parms.tf_subtype = subtype;			\
+		parms.tf_resp_code = 0;				\
+		parms.req_size = sizeof(req);			\
+		parms.req_data = (uint32_t *)&(req);		\
+		parms.resp_size = sizeof(resp);			\
+		parms.resp_data = (uint32_t *)&(resp);		\
+	} while (0)
+
+#define MSG_PREP_NO_REQ(parms, mb, type, subtype, resp) do {	\
+		parms.mailbox = mb;				\
+		parms.tf_type = type;				\
+		parms.tf_subtype = subtype;			\
+		parms.tf_resp_code = 0;				\
+		parms.req_size  = 0;				\
+		parms.req_data  = NULL;				\
+		parms.resp_size = sizeof(resp);			\
+		parms.resp_data = (uint32_t *)&(resp);		\
+	} while (0)
+
+#define MSG_PREP_NO_RESP(parms, mb, type, subtype, req) do {	\
+		parms.mailbox = mb;				\
+		parms.tf_type = type;				\
+		parms.tf_subtype = subtype;			\
+		parms.tf_resp_code = 0;				\
+		parms.req_size = sizeof(req);			\
+		parms.req_data = (uint32_t *)&(req);		\
+		parms.resp_size = 0;				\
+		parms.resp_data = NULL;				\
+	} while (0)
+
+#endif /* _TF_MSG_COMMON_H_ */
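
MSG_PREP simply fills a tfp_send_msg_parms ahead of a tunneled send; a
usage sketch (tfp and rc as declared in the callers; the req/resp types
are the tunneled qcaps messages used later in the series):

	struct tfp_send_msg_parms parms = { 0 };
	struct tf_session_hw_resc_qcaps_input req = { 0 };
	struct tf_session_hw_resc_qcaps_output resp = { 0 };

	MSG_PREP(parms,
		 TF_KONG_MB,
		 HWRM_TF,
		 HWRM_TFT_SESSION_HW_RESC_QCAPS,
		 req,
		 resp);
	rc = tfp_send_msg_tunneled(tfp, &parms);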
diff --git a/drivers/net/bnxt/tf_core/tf_project.h b/drivers/net/bnxt/tf_core/tf_project.h
new file mode 100644
index 0000000..ab5f113
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_project.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_PROJECT_H_
+#define _TF_PROJECT_H_
+
+/* Wh+ support enabled */
+#ifndef TF_SUPPORT_P4
+#define TF_SUPPORT_P4 1
+#endif
+
+/* Shadow DB Support */
+#ifndef TF_SHADOW
+#define TF_SHADOW 0
+#endif
+
+/* Shared memory for session */
+#ifndef TF_SHARED
+#define TF_SHARED 0
+#endif
+
+#endif /* _TF_PROJECT_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
new file mode 100644
index 0000000..160abac
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_RESOURCES_H_
+#define _TF_RESOURCES_H_
+
+/*
+ * Hardware specific MAX values
+ * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
+ */
+
+/** HW Resource types
+ */
+enum tf_resource_type_hw {
+	/* Common HW resources for all chip variants */
+	TF_RESC_TYPE_HW_L2_CTXT_TCAM,
+	TF_RESC_TYPE_HW_PROF_FUNC,
+	TF_RESC_TYPE_HW_PROF_TCAM,
+	TF_RESC_TYPE_HW_EM_PROF_ID,
+	TF_RESC_TYPE_HW_EM_REC,
+	TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+	TF_RESC_TYPE_HW_WC_TCAM,
+	TF_RESC_TYPE_HW_METER_PROF,
+	TF_RESC_TYPE_HW_METER_INST,
+	TF_RESC_TYPE_HW_MIRROR,
+	TF_RESC_TYPE_HW_UPAR,
+	/* Wh+/Brd2 specific HW resources */
+	TF_RESC_TYPE_HW_SP_TCAM,
+	/* Brd2/Brd4 specific HW resources */
+	TF_RESC_TYPE_HW_L2_FUNC,
+	/* Brd3, Brd4 common HW resources */
+	TF_RESC_TYPE_HW_FKB,
+	/* Brd4 specific HW resources */
+	TF_RESC_TYPE_HW_TBL_SCOPE,
+	TF_RESC_TYPE_HW_EPOCH0,
+	TF_RESC_TYPE_HW_EPOCH1,
+	TF_RESC_TYPE_HW_METADATA,
+	TF_RESC_TYPE_HW_CT_STATE,
+	TF_RESC_TYPE_HW_RANGE_PROF,
+	TF_RESC_TYPE_HW_RANGE_ENTRY,
+	TF_RESC_TYPE_HW_LAG_ENTRY,
+	TF_RESC_TYPE_HW_MAX
+};
+#endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
new file mode 100644
index 0000000..5164d6b
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_RM_H_
+#define TF_RM_H_
+
+#include "tf_resources.h"
+#include "tf_core.h"
+
+struct tf;
+struct tf_session;
+
+/**
+ * Resource query single entry
+ */
+struct tf_rm_query_entry {
+	/** Minimum guaranteed number of elements */
+	uint16_t min;
+	/** Maximum non-guaranteed number of elements */
+	uint16_t max;
+};
+
+/**
+ * Resource query array of HW entities
+ */
+struct tf_rm_hw_query {
+	/** array of HW resource entries */
+	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
+};
+
+#endif /* TF_RM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
new file mode 100644
index 0000000..32e53c0
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SESSION_H_
+#define _TF_SESSION_H_
+
+#include <stdint.h>
+#include <stdlib.h>
+
+#include "tf_core.h"
+#include "tf_rm.h"
+
+/** Session defines
+ */
+#define TF_SESSIONS_MAX	          1          /** max # sessions */
+#define TF_SESSION_ID_INVALID     0xFFFFFFFF /** Invalid Session ID define */
+
+/** Session
+ *
+ * Shared memory containing private TruFlow session information.
+ * Through this structure the session can keep track of resource
+ * allocations and (if so configured) any shadow copy of flow
+ * information.
+ *
+ * Memory is assigned to the TruFlow instance by way of
+ * tf_open_session. Memory is allocated and owned by the ULP.
+ *
+ * Access control to this shared memory is handled by the spin_lock in
+ * tf_session_info.
+ */
+struct tf_session {
+	/** TrueFlow Version. Used to control the structure layout
+	 * when sharing sessions. No guarantee that a secondary
+	 * process would come from the same version of an executable.
+	 */
+	struct tf_session_version ver;
+
+	/** Device type, provided by tf_open_session().
+	 */
+	enum tf_device_type device_type;
+
+	/** Session ID, allocated by FW on tf_open_session().
+	 */
+	union tf_session_id session_id;
+
+	/**
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/**
+	 * Boolean controlling the use and availability of shadow
+	 * copy. Shadow copy will allow the TruFlow Core to keep track
+	 * of resource content on the firmware side without having to
+	 * query firmware. Additional private session core_data will
+	 * be allocated if this boolean is set to 'true', default
+	 * 'false'.
+	 *
+	 * Size of memory depends on the NVM Resource settings for the
+	 * control channel.
+	 */
+	bool shadow_copy;
+
+	/**
+	 * Session Reference Count. To keep track of functions per
+	 * session the ref_count is incremented. There is also a
+	 * parallel TruFlow Firmware ref_count in case the TruFlow
+	 * Core goes away without informing the Firmware.
+	 */
+	uint8_t ref_count;
+
+	/** CRC32 seed table */
+#define TF_LKUP_SEED_MEM_SIZE 512
+	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
+	/** Lookup3 init values */
+	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
+
+};
+#endif /* _TF_SESSION_H_ */
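
For reference, the private session data is reached through the handle
exactly as tf_msg.c does it (sketch; tfp as declared in the callers):

	struct tf_session *tfs;

	tfs = (struct tf_session *)tfp->session->core_data;
	/* e.g. tfs->session_id.internal.fw_session_id is what gets
	 * placed into firmware requests
	 */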
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
new file mode 100644
index 0000000..fb5c297
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -0,0 +1,163 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_memcpy.h>
+#include <rte_byteorder.h>
+#include <rte_config.h>
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+
+#include "tf_core.h"
+#include "tfp.h"
+#include "bnxt.h"
+#include "bnxt_hwrm.h"
+#include "tf_msg_common.h"
+
+/**
+ * Sends TruFlow msg to the TruFlow Firmware using
+ * a message specific HWRM message type.
+ *
+ * Returns success or failure code.
+ */
+int
+tfp_send_msg_direct(struct tf *tfp,
+		    struct tfp_send_msg_parms *parms)
+{
+	int      rc = 0;
+	uint8_t  use_kong_mb = 1;
+
+	if (parms == NULL)
+		return -EINVAL;
+
+	if (parms->mailbox == TF_CHIMP_MB)
+		use_kong_mb = 0;
+
+	rc = bnxt_hwrm_tf_message_direct(container_of(tfp,
+					       struct bnxt,
+					       tfp),
+					 use_kong_mb,
+					 parms->tf_type,
+					 parms->req_data,
+					 parms->req_size,
+					 parms->resp_data,
+					 parms->resp_size);
+
+	return rc;
+}
+
+/**
+ * Sends preformatted TruFlow msg to the TruFlow Firmware using
+ * the Truflow tunnel HWRM message type.
+ *
+ * Returns success or failure code.
+ */
+int
+tfp_send_msg_tunneled(struct tf *tfp,
+		      struct tfp_send_msg_parms *parms)
+{
+	int      rc = 0;
+	uint8_t  use_kong_mb = 1;
+
+	if (parms == NULL)
+		return -EINVAL;
+
+	if (parms->mailbox == TF_CHIMP_MB)
+		use_kong_mb = 0;
+
+	rc = bnxt_hwrm_tf_message_tunneled(container_of(tfp,
+						  struct bnxt,
+						  tfp),
+					   use_kong_mb,
+					   parms->tf_type,
+					   parms->tf_subtype,
+					   &parms->tf_resp_code,
+					   parms->req_data,
+					   parms->req_size,
+					   parms->resp_data,
+					   parms->resp_size);
+
+	return rc;
+}
+
+/**
+ * Allocates zero'ed memory from the heap.
+ *
+ * Returns success or failure code.
+ */
+int
+tfp_calloc(struct tfp_calloc_parms *parms)
+{
+	if (parms == NULL)
+		return -EINVAL;
+
+	parms->mem_va = rte_zmalloc("tf",
+				    (parms->nitems * parms->size),
+				    parms->alignment);
+	if (parms->mem_va == NULL) {
+		PMD_DRV_LOG(ERR, "Allocate failed mem_va\n");
+		return -ENOMEM;
+	}
+
+	parms->mem_pa = (void *)rte_mem_virt2iova(parms->mem_va);
+	if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
+		PMD_DRV_LOG(ERR, "Allocate failed mem_pa\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/**
+ * Frees the memory space pointed to by the provided pointer. The
+ * pointer must have been returned from the tfp_calloc().
+ */
+void
+tfp_free(void *addr)
+{
+	rte_free(addr);
+}
+
+/**
+ * Copies n bytes from src memory to dest memory. The memory areas
+ * must not overlap.
+ */
+void
+tfp_memcpy(void *dest, void *src, size_t n)
+{
+	rte_memcpy(dest, src, n);
+}
+
+/**
+ * Used to initialize portable spin lock
+ */
+void
+tfp_spinlock_init(struct tfp_spinlock_parms *parms)
+{
+	rte_spinlock_init(&parms->slock);
+}
+
+/**
+ * Used to lock portable spin lock
+ */
+void
+tfp_spinlock_lock(struct tfp_spinlock_parms *parms)
+{
+	rte_spinlock_lock(&parms->slock);
+}
+
+/**
+ * Used to unlock portable spin lock
+ */
+void
+tfp_spinlock_unlock(struct tfp_spinlock_parms *parms)
+{
+	rte_spinlock_unlock(&parms->slock);
+}
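
Since tfp_calloc returns both the virtual and the IOVA address, callers
that hand memory to the NIC can use mem_pa directly; a sketch (the size
and alignment values are illustrative only):

	struct tfp_calloc_parms aparms = { 0 };

	aparms.nitems = 1;
	aparms.size = 4096;
	aparms.alignment = 4096;	/* 4K aligned for DMA */
	if (tfp_calloc(&aparms) == 0) {
		/* aparms.mem_va for CPU access, aparms.mem_pa for the device */
		tfp_free(aparms.mem_va);
	}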
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
new file mode 100644
index 0000000..8d5e94e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -0,0 +1,188 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* This header file defines the Portability structures and APIs for
+ * TruFlow.
+ */
+
+#ifndef _TFP_H_
+#define _TFP_H_
+
+#include <rte_spinlock.h>
+
+/** Spinlock
+ */
+struct tfp_spinlock_parms {
+	rte_spinlock_t slock;
+};
+
+/**
+ * @file
+ *
+ * TrueFlow Portability API Header File
+ */
+
+/** send message parameter definition
+ */
+struct tfp_send_msg_parms {
+	/**
+	 * [in] mailbox, specifying the Mailbox to send the command on.
+	 */
+	uint32_t  mailbox;
+	/**
+	 * [in] tf_type, specifies the tlv type.
+	 */
+	uint16_t  tf_type;
+	/**
+	 * [in] tf_subtype, specifies the tlv subtype.
+	 */
+	uint16_t  tf_subtype;
+	/**
+	 * [out] tf_resp_code, response code from the internal tlv
+	 *       message. Only supported on tunneled messages.
+	 */
+	uint32_t tf_resp_code;
+	/**
+	 * [in] size, number specifying the request size of the data in bytes
+	 */
+	uint32_t req_size;
+	/**
+	 * [in] data, pointer to the data to be sent within the HWRM command
+	 */
+	uint32_t *req_data;
+	/**
+	 * [in] size, number specifying the size of the response buffer in
+	 *      bytes
+	 */
+	uint32_t resp_size;
+	/**
+	 * [out] data, pointer to the buffer used to return the response
+	 *       data from the HWRM command
+	 */
+	uint32_t *resp_data;
+};
+
+/** calloc parameter definition
+ */
+struct tfp_calloc_parms {
+	/**
+	 * [in] nitems, number specifying number of items to allocate.
+	 */
+	size_t nitems;
+	/**
+	 * [in] size, number specifying the size of each memory item
+	 *      requested. Size is in bytes.
+	 */
+	size_t size;
+	/**
+	 * [in] alignment, number indicates byte alignment required. 0
+	 *      - don't care, 16 - 16 byte alignment, 4K - 4K alignment etc
+	 */
+	size_t alignment;
+	/**
+	 * [out] mem_va, pointer to the allocated memory.
+	 */
+	void *mem_va;
+	/**
+	 * [out] mem_pa, physical address of the allocated memory.
+	 */
+	void *mem_pa;
+};
+
+/**
+ * @page Portability
+ *
+ * @ref tfp_send_msg_direct
+ * @ref tfp_send_msg_tunneled
+ *
+ * @ref tfp_calloc
+ * @ref tfp_free
+ * @ref tfp_memcpy
+ *
+ * @ref tfp_spinlock_init
+ * @ref tfp_spinlock_lock
+ * @ref tfp_spinlock_unlock
+ *
+ * @ref tfp_cpu_to_le_16
+ * @ref tfp_le_to_cpu_16
+ * @ref tfp_cpu_to_le_32
+ * @ref tfp_le_to_cpu_32
+ * @ref tfp_cpu_to_le_64
+ * @ref tfp_le_to_cpu_64
+ * @ref tfp_cpu_to_be_16
+ * @ref tfp_be_to_cpu_16
+ * @ref tfp_cpu_to_be_32
+ * @ref tfp_be_to_cpu_32
+ * @ref tfp_cpu_to_be_64
+ * @ref tfp_be_to_cpu_64
+ */
+
+#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
+#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
+#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
+#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
+#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
+#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
+#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
+#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
+#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
+#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
+#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
+#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
+#define tfp_bswap_16(val) rte_bswap16(val)
+#define tfp_bswap_32(val) rte_bswap32(val)
+#define tfp_bswap_64(val) rte_bswap64(val)
+
+/**
+ * Provides communication capability from the TrueFlow API layer to
+ * the TrueFlow firmware. The portability layer internally provides
+ * the transport to the firmware.
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] parms, parameter structure
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int tfp_send_msg_direct(struct tf *tfp,
+			struct tfp_send_msg_parms *parms);
+
+/**
+ * Provides communication capability from the TrueFlow API layer to
+ * the TrueFlow firmware. The portability layer internally provides
+ * the transport to the firmware.
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] parms, parameter structure
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int tfp_send_msg_tunneled(struct tf                 *tfp,
+			  struct tfp_send_msg_parms *parms);
+
+/**
+ * Allocates zero'ed memory from the heap.
+ *
+ * NOTE: Also performs virt2phy address conversion by default, thus it
+ * can be expensive to invoke.
+ *
+ * [in] parms, parameter structure
+ *
+ * Returns:
+ *   0              - Success
+ *   -ENOMEM        - No memory available
+ *   -EINVAL        - Parameter error
+ */
+int tfp_calloc(struct tfp_calloc_parms *parms);
+
+void tfp_free(void *addr);
+void tfp_memcpy(void *dest, void *src, size_t n);
void tfp_spinlock_init(struct tfp_spinlock_parms *parms);
void tfp_spinlock_lock(struct tfp_spinlock_parms *parms);
void tfp_spinlock_unlock(struct tfp_spinlock_parms *parms);
+#endif /* _TFP_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 05/34] net/bnxt: add initial tf core session close support
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (3 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 04/34] net/bnxt: add initial tf core session open Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 06/34] net/bnxt: add tf core session sram functions Venkat Duvvuru
                       ` (29 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TruFlow session and resource support functions
- Add TruFlow session close API and related message support functions
  for both session and hw resources

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile               |   1 +
 drivers/net/bnxt/tf_core/bitalloc.c     | 364 +++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/bitalloc.h     | 119 ++++++++++
 drivers/net/bnxt/tf_core/tf_core.c      |  86 +++++++
 drivers/net/bnxt/tf_core/tf_msg.c       | 401 ++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h       |  42 ++++
 drivers/net/bnxt/tf_core/tf_resources.h |  24 +-
 drivers/net/bnxt/tf_core/tf_rm.h        | 113 +++++++++
 drivers/net/bnxt/tf_core/tf_session.h   |   1 +
 9 files changed, 1146 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.c
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 8a68059..8474673 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -48,6 +48,7 @@ CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core
 endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
diff --git a/drivers/net/bnxt/tf_core/bitalloc.c b/drivers/net/bnxt/tf_core/bitalloc.c
new file mode 100644
index 0000000..fb4df9a
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/bitalloc.c
@@ -0,0 +1,364 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bitalloc.h"
+
+#define BITALLOC_MAX_LEVELS 6
+
+/* Finds the first bit set plus 1, equivalent to gcc __builtin_ffs */
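+/* In this allocator a set bit marks a free entry (or a subtree that
+ * contains one), so e.g. ba_ffs(0x0C) == 3 locates the lowest free
+ * index within a word; ba_ffs(0) == 0 means the word is fully allocated.
+ */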
+static int
+ba_ffs(bitalloc_word_t v)
+{
+	int c; /* c will be the number of zero bits on the right plus 1 */
+
+	v &= -v;
+	c = v ? 32 : 0;
+
+	if (v & 0x0000FFFF)
+		c -= 16;
+	if (v & 0x00FF00FF)
+		c -= 8;
+	if (v & 0x0F0F0F0F)
+		c -= 4;
+	if (v & 0x33333333)
+		c -= 2;
+	if (v & 0x55555555)
+		c -= 1;
+
+	return c;
+}
+
+int
+ba_init(struct bitalloc *pool, int size)
+{
+	bitalloc_word_t *mem = (bitalloc_word_t *)pool;
+	int       i;
+
+	/* Initialize */
+	pool->size = 0;
+
+	if (size < 1 || size > BITALLOC_MAX_SIZE)
+		return -1;
+
+	/* Zero structure */
+	for (i = 0;
+	     i < (int)(BITALLOC_SIZEOF(size) / sizeof(bitalloc_word_t));
+	     i++)
+		mem[i] = 0;
+
+	/* Initialize */
+	pool->size = size;
+
+	/* Embed number of words of next level, after each level */
+	int words[BITALLOC_MAX_LEVELS];
+	int lev = 0;
+	int offset = 0;
+
+	words[0] = (size + 31) / 32;
+	while (words[lev] > 1) {
+		lev++;
+		words[lev] = (words[lev - 1] + 31) / 32;
+	}
+
+	while (lev) {
+		offset += words[lev];
+		pool->storage[offset++] = words[--lev];
+	}
+
+	/* Free the entire pool */
+	for (i = 0; i < size; i++)
+		ba_free(pool, i);
+
+	return 0;
+}
+
+static int
+ba_alloc_helper(struct bitalloc *pool,
+		int              offset,
+		int              words,
+		unsigned int     size,
+		int              index,
+		int             *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc = ba_ffs(storage[index]);
+	int       r;
+
+	if (loc == 0)
+		return -1;
+
+	loc--;
+
+	if (pool->size > size) {
+		r = ba_alloc_helper(pool,
+				    offset + words + 1,
+				    storage[words],
+				    size * 32,
+				    index * 32 + loc,
+				    clear);
+	} else {
+		r = index * 32 + loc;
+		*clear = 1;
+		pool->free_count--;
+	}
+
+	if (*clear) {
+		storage[index] &= ~(1 << loc);
+		*clear = (storage[index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc(struct bitalloc *pool)
+{
+	int clear = 0;
+
+	return ba_alloc_helper(pool, 0, 1, 32, 0, &clear);
+}
+
+static int
+ba_alloc_index_helper(struct bitalloc *pool,
+		      int              offset,
+		      int              words,
+		      unsigned int     size,
+		      int             *index,
+		      int             *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc;
+	int       r;
+
+	if (pool->size > size)
+		r = ba_alloc_index_helper(pool,
+					  offset + words + 1,
+					  storage[words],
+					  size * 32,
+					  index,
+					  clear);
+	else
+		r = 1; /* Check if already allocated */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (r == 1) {
+		r = (storage[*index] & (1 << loc)) ? 0 : -1;
+		if (r == 0) {
+			*clear = 1;
+			pool->free_count--;
+		}
+	}
+
+	if (*clear) {
+		storage[*index] &= ~(1 << loc);
+		*clear = (storage[*index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc_index(struct bitalloc *pool, int index)
+{
+	int clear = 0;
+	int index_copy = index;
+
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	if (ba_alloc_index_helper(pool, 0, 1, 32, &index_copy, &clear) >= 0)
+		return index;
+	else
+		return -1;
+}
+
+static int
+ba_inuse_helper(struct bitalloc *pool,
+		int              offset,
+		int              words,
+		unsigned int     size,
+		int             *index)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc;
+	int       r;
+
+	if (pool->size > size)
+		r = ba_inuse_helper(pool,
+				    offset + words + 1,
+				    storage[words],
+				    size * 32,
+				    index);
+	else
+		r = 1; /* Check if in use */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (r == 1)
+		r = (storage[*index] & (1 << loc)) ? -1 : 0;
+
+	return r;
+}
+
+int
+ba_inuse(struct bitalloc *pool, int index)
+{
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	return ba_inuse_helper(pool, 0, 1, 32, &index) == 0;
+}
+
+static int
+ba_free_helper(struct bitalloc *pool,
+	       int              offset,
+	       int              words,
+	       unsigned int     size,
+	       int             *index)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc;
+	int       r;
+
+	if (pool->size > size)
+		r = ba_free_helper(pool,
+				   offset + words + 1,
+				   storage[words],
+				   size * 32,
+				   index);
+	else
+		r = 1; /* Check if already free */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (r == 1) {
+		r = (storage[*index] & (1 << loc)) ? -1 : 0;
+		if (r == 0)
+			pool->free_count++;
+	}
+
+	if (r == 0)
+		storage[*index] |= (1 << loc);
+
+	return r;
+}
+
+int
+ba_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	return ba_free_helper(pool, 0, 1, 32, &index);
+}
+
+int
+ba_inuse_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	return ba_free_helper(pool, 0, 1, 32, &index) + 1;
+}
+
+int
+ba_free_count(struct bitalloc *pool)
+{
+	return (int)pool->free_count;
+}
+
+int ba_inuse_count(struct bitalloc *pool)
+{
+	return (int)(pool->size) - (int)(pool->free_count);
+}
+
+static int
+ba_find_next_helper(struct bitalloc *pool,
+		    int              offset,
+		    int              words,
+		    unsigned int     size,
+		    int             *index,
+		    int              free)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc, r, bottom = 0;
+
+	if (pool->size > size)
+		r = ba_find_next_helper(pool,
+					offset + words + 1,
+					storage[words],
+					size * 32,
+					index,
+					free);
+	else
+		bottom = 1; /* Bottom of tree */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (bottom) {
+		int bit_index = *index * 32;
+
+		loc = ba_ffs(~storage[*index] & ((bitalloc_word_t)-1 << loc));
+		if (loc > 0) {
+			loc--;
+			r = (bit_index + loc);
+			if (r >= (int)pool->size)
+				r = -1;
+		} else {
+			/* Loop over array at bottom of tree */
+			r = -1;
+			bit_index += 32;
+			*index = *index + 1;
+			while ((int)pool->size > bit_index) {
+				loc = ba_ffs(~storage[*index]);
+
+				if (loc > 0) {
+					loc--;
+					r = (bit_index + loc);
+					if (r >= (int)pool->size)
+						r = -1;
+					break;
+				}
+				bit_index += 32;
+				*index = *index + 1;
+			}
+		}
+	}
+
+	if (r >= 0 && (free)) {
+		if (bottom)
+			pool->free_count++;
+		storage[*index] |= (1 << loc);
+	}
+
+	return r;
+}
+
+int
+ba_find_next_inuse(struct bitalloc *pool, int index)
+{
+	if (index < 0 ||
+	    index >= (int)pool->size ||
+	    pool->free_count == pool->size)
+		return -1;
+
+	return ba_find_next_helper(pool, 0, 1, 32, &index, 0);
+}
+
+int
+ba_find_next_inuse_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 ||
+	    index >= (int)pool->size ||
+	    pool->free_count == pool->size)
+		return -1;
+
+	return ba_find_next_helper(pool, 0, 1, 32, &index, 1);
+}
diff --git a/drivers/net/bnxt/tf_core/bitalloc.h b/drivers/net/bnxt/tf_core/bitalloc.h
new file mode 100644
index 0000000..563c853
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/bitalloc.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BITALLOC_H_
+#define _BITALLOC_H_
+
+#include <stdint.h>
+
+/* Bitalloc works on uint32_t as its word size */
+typedef uint32_t bitalloc_word_t;
+
+struct bitalloc {
+	bitalloc_word_t size;
+	bitalloc_word_t free_count;
+	bitalloc_word_t storage[1];
+};
+
+#define BA_L0(s) (((s) + 31) / 32)
+#define BA_L1(s) ((BA_L0(s) + 31) / 32)
+#define BA_L2(s) ((BA_L1(s) + 31) / 32)
+#define BA_L3(s) ((BA_L2(s) + 31) / 32)
+#define BA_L4(s) ((BA_L3(s) + 31) / 32)
+
+#define BITALLOC_SIZEOF(size)                                    \
+	(sizeof(struct bitalloc) *				 \
+	 (((sizeof(struct bitalloc) +				 \
+	    sizeof(struct bitalloc) - 1 +			 \
+	    (sizeof(bitalloc_word_t) *				 \
+	     ((BA_L0(size) - 1) +				 \
+	      ((BA_L0(size) == 1) ? 0 : (BA_L1(size) + 1)) +	 \
+	      ((BA_L1(size) == 1) ? 0 : (BA_L2(size) + 1)) +	 \
+	      ((BA_L2(size) == 1) ? 0 : (BA_L3(size) + 1)) +	 \
+	      ((BA_L3(size) == 1) ? 0 : (BA_L4(size) + 1)))))) / \
+	  sizeof(struct bitalloc)))
+
+#define BITALLOC_MAX_SIZE (32 * 32 * 32 * 32 * 32 * 32)
+
+/* The instantiation of a bitalloc looks a bit odd. Since a
+ * bit allocator has variable storage, we need a way to get a
+ * pointer to a bitalloc structure that points to the correct
+ * amount of storage. We do this by creating an array of
+ * bitalloc where the first element in the array is the
+ * actual bitalloc base structure, and the remaining elements
+ * in the array provide the storage for it. This approach allows
+ * instances to be individual variables or members of larger
+ * structures.
+ */
+#define BITALLOC_INST(name, size)                      \
+	struct bitalloc name[(BITALLOC_SIZEOF(size) /  \
+			      sizeof(struct bitalloc))]
+
+/* Symbolic return codes */
+#define BA_SUCCESS           0
+#define BA_FAIL             -1
+#define BA_ENTRY_FREE        0
+#define BA_ENTRY_IN_USE      1
+#define BA_NO_ENTRY_FOUND   -1
+
+/**
+ * Initializes the bit allocator
+ *
+ * Returns 0 on success, -1 on failure.  Size is arbitrary up to
+ * BITALLOC_MAX_SIZE
+ */
+int ba_init(struct bitalloc *pool, int size);
+
+/**
+ * Returns -1 on failure, or index of allocated entry
+ */
+int ba_alloc(struct bitalloc *pool);
+int ba_alloc_index(struct bitalloc *pool, int index);
+
+/**
+ * Query a particular index in a pool to check if it is in use.
+ *
+ * Returns -1 on invalid index, 1 if the index is allocated, 0 if it
+ * is free
+ */
+int ba_inuse(struct bitalloc *pool, int index);
+
+/**
+ * Variant of ba_inuse that frees the index if it is allocated, same
+ * return codes as ba_inuse
+ */
+int ba_inuse_free(struct bitalloc *pool, int index);
+
+/**
+ * Find next index that is in use, start checking at index 'idx'
+ *
+ * Returns next index that is in use on success, or
+ * -1 if no in use index is found
+ */
+int ba_find_next_inuse(struct bitalloc *pool, int idx);
+
+/**
+ * Variant of ba_find_next_inuse that also frees the next in use index,
+ * same return codes as ba_find_next_inuse
+ */
+int ba_find_next_inuse_free(struct bitalloc *pool, int idx);
+
+/**
+ * Frees the given index. Freeing an index that is already free has
+ * no negative side effects, but returns -1. Returns 0 on success,
+ * -1 on failure.
+ */
+int ba_free(struct bitalloc *pool, int index);
+
+/**
+ * Returns the pool's free count
+ */
+int ba_free_count(struct bitalloc *pool);
+
+/**
+ * Returns the pool's in use count
+ */
+int ba_inuse_count(struct bitalloc *pool);
+
+#endif /* _BITALLOC_H_ */
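
Putting the instantiation comment above into practice, a usage sketch
(the pool name and size are illustrative; size is arbitrary up to
BITALLOC_MAX_SIZE):

	BITALLOC_INST(my_pool, 1024);	/* storage for 1024 ids */
	int id;

	ba_init(my_pool, 1024);
	id = ba_alloc(my_pool);		/* lowest free index, or -1 */
	if (id >= 0)
		ba_free(my_pool, id);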
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6bafae5..3c5d55d 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -7,10 +7,18 @@
 
 #include "tf_core.h"
 #include "tf_session.h"
+#include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
+#include "bitalloc.h"
 #include "bnxt.h"
 
+static inline uint32_t SWAP_WORDS32(uint32_t val32)
+{
+	return (((val32 & 0x0000ffff) << 16) |
+		((val32 & 0xffff0000) >> 16));
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -141,5 +149,83 @@ tf_open_session(struct tf                    *tfp,
 	return rc;
 
  cleanup_close:
+	tf_close_session(tfp);
 	return -EINVAL;
 }
+
+int
+tf_attach_session(struct tf *tfp __rte_unused,
+		  struct tf_attach_session_parms *parms __rte_unused)
+{
+#if (TF_SHARED == 1)
+	int rc;
+
+	if (tfp == NULL)
+		return -EINVAL;
+
+	/* - Open the shared memory for the attach_chan_name
+	 * - Point to the shared session for this Device instance
+	 * - Check that session is valid
+	 * - Attach to the firmware so it can record there is more
+	 *   than one client of the session.
+	 */
+
+	if (tfp->session) {
+		if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
+			rc = tf_msg_session_attach(tfp,
+						   parms->ctrl_chan_name,
+						   parms->session_id);
+		}
+	}
+#endif /* TF_SHARED */
+	return -1;
+}
+
+int
+tf_close_session(struct tf *tfp)
+{
+	int rc;
+	int rc_close = 0;
+	struct tf_session *tfs;
+	union tf_session_id session_id;
+
+	if (tfp == NULL || tfp->session == NULL)
+		return -EINVAL;
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
+		rc = tf_msg_session_close(tfp);
+		if (rc) {
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "Message send failed, rc:%d\n",
+				    rc);
+		}
+
+		/* Update the ref_count */
+		tfs->ref_count--;
+	}
+
+	session_id = tfs->session_id;
+
+	/* Final cleanup as we're last user of the session */
+	if (tfs->ref_count == 0) {
+		tfp_free(tfp->session->core_data);
+		tfp_free(tfp->session);
+		tfp->session = NULL;
+	}
+
+	PMD_DRV_LOG(INFO,
+		    "Session closed, session_id:%d\n",
+		    session_id.id);
+
+	PMD_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    session_id.internal.domain,
+		    session_id.internal.bus,
+		    session_id.internal.device,
+		    session_id.internal.fw_session_id);
+
+	return rc_close;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 2b68681..e05aea7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,6 +18,82 @@
 #include "hwrm_tf.h"
 
 /**
+ * Endian converts min and max values from the HW response to the query
+ */
+#define TF_HW_RESP_TO_QUERY(query, index, response, element) do {            \
+	(query)->hw_query[index].min =                                       \
+		tfp_le_to_cpu_16(response. element ## _min);                 \
+	(query)->hw_query[index].max =                                       \
+		tfp_le_to_cpu_16(response. element ## _max);                 \
+} while (0)
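+
+/* For example, TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_UPAR, resp, upar)
+ * expands to assignments from resp.upar_min and resp.upar_max into
+ * query->hw_query[TF_RESC_TYPE_HW_UPAR].
+ */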
+
+/**
+ * Endian converts the number of entries from the alloc to the request
+ */
+#define TF_HW_ALLOC_TO_REQ(alloc, index, request, element)                   \
+	(request. num_ ## element = tfp_cpu_to_le_16((alloc)->hw_num[index]))
+
+/**
+ * Endian converts the start and stride value from the free to the request
+ */
+#define TF_HW_FREE_TO_REQ(hw_entry, index, request, element) do {            \
+	request.element ## _start =                                          \
+		tfp_cpu_to_le_16(hw_entry[index].start);                     \
+	request.element ## _stride =                                         \
+		tfp_cpu_to_le_16(hw_entry[index].stride);                    \
+} while (0)
+
+/**
+ * Endian converts the start and stride from the HW response to the
+ * alloc
+ */
+#define TF_HW_RESP_TO_ALLOC(hw_entry, index, response, element) do {         \
+	hw_entry[index].start =                                              \
+		tfp_le_to_cpu_16(response.element ## _start);                \
+	hw_entry[index].stride =                                             \
+		tfp_le_to_cpu_16(response.element ## _stride);               \
+} while (0)
+
+/**
+ * Endian converts min and max values from the SRAM response to the
+ * query
+ */
+#define TF_SRAM_RESP_TO_QUERY(query, index, response, element) do {          \
+	(query)->sram_query[index].min =                                     \
+		tfp_le_to_cpu_16(response.element ## _min);                  \
+	(query)->sram_query[index].max =                                     \
+		tfp_le_to_cpu_16(response.element ## _max);                  \
+} while (0)
+
+/**
+ * Endian converts the number of entries from the action (alloc) to
+ * the request
+ */
+#define TF_SRAM_ALLOC_TO_REQ(action, index, request, element)                \
+	(request. num_ ## element = tfp_cpu_to_le_16((action)->sram_num[index]))
+
+/**
+ * Endian converts the start and stride value from the free to the request
+ */
+#define TF_SRAM_FREE_TO_REQ(sram_entry, index, request, element) do {        \
+	request.element ## _start =                                          \
+		tfp_cpu_to_le_16(sram_entry[index].start);                   \
+	request.element ## _stride =                                         \
+		tfp_cpu_to_le_16(sram_entry[index].stride);                  \
+} while (0)
+
+/**
+ * Endian converts the start and stride from the HW response to the
+ * alloc
+ */
+#define TF_SRAM_RESP_TO_ALLOC(sram_entry, index, response, element) do {     \
+	sram_entry[index].start =                                            \
+		tfp_le_to_cpu_16(response.element ## _start);                \
+	sram_entry[index].stride =                                           \
+		tfp_le_to_cpu_16(response.element ## _stride);               \
+} while (0)
+
+/**
  * Sends session open request to TF Firmware
  */
 int
@@ -51,6 +127,45 @@ tf_msg_session_open(struct tf *tfp,
 }
 
 /**
+ * Sends session attach request to TF Firmware
+ */
+int
+tf_msg_session_attach(struct tf *tfp __rte_unused,
+		      char *ctrl_chan_name __rte_unused,
+		      uint8_t tf_fw_session_id __rte_unused)
+{
+	return -1;
+}
+
+/**
+ * Sends session close request to TF Firmware
+ */
+int
+tf_msg_session_close(struct tf *tfp)
+{
+	int rc;
+	struct hwrm_tf_session_close_input req = { 0 };
+	struct hwrm_tf_session_close_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	parms.tf_type = HWRM_TF_SESSION_CLOSE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
  * Sends session query config request to TF Firmware
  */
 int
@@ -77,3 +192,289 @@ tf_msg_session_qcfg(struct tf *tfp)
 				 &parms);
 	return rc;
 }
+
+/**
+ * Sends session HW resource query capability request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_qcaps(struct tf *tfp,
+			     enum tf_dir dir,
+			     struct tf_rm_hw_query *query)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_qcaps_input req = { 0 };
+	struct tf_session_hw_resc_qcaps_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(query, 0, sizeof(*query));
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_HW_RESC_QCAPS,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
+			    l2_ctx_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_FUNC, resp,
+			    prof_func);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_TCAM, resp,
+			    prof_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
+			    em_prof_id);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_REC, resp,
+			    em_record_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
+			    wc_tcam_prof_id);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM, resp,
+			    wc_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_PROF, resp,
+			    meter_profiles);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_INST,
+			    resp, meter_inst);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_MIRROR, resp,
+			    mirrors);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_UPAR, resp,
+			    upar);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_SP_TCAM, resp,
+			    sp_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_FUNC, resp,
+			    l2_func);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_FKB, resp,
+			    flex_key_templ);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
+			    tbl_scope);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH0, resp,
+			    epoch0_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH1, resp,
+			    epoch1_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METADATA, resp,
+			    metadata);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_CT_STATE, resp,
+			    ct_state);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_PROF, resp,
+			    range_prof);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
+			    range_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
+			    lag_tbl_entries);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_alloc(struct tf *tfp,
+			     enum tf_dir dir,
+			     struct tf_rm_hw_alloc *hw_alloc,
+			     struct tf_rm_entry *hw_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_alloc_input req = { 0 };
+	struct tf_session_hw_resc_alloc_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(hw_entry, 0, sizeof(*hw_entry) * TF_RESC_TYPE_HW_MAX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
+			   l2_ctx_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_FUNC, req,
+			   prof_func_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_TCAM, req,
+			   prof_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_PROF_ID, req,
+			   em_prof_id);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_REC, req,
+			   em_record_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
+			   wc_tcam_prof_id);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM, req,
+			   wc_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_PROF, req,
+			   meter_profiles);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_INST, req,
+			   meter_inst);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_MIRROR, req,
+			   mirrors);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_UPAR, req,
+			   upar);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_SP_TCAM, req,
+			   sp_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_FUNC, req,
+			   l2_func);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_FKB, req,
+			   flex_key_templ);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_TBL_SCOPE, req,
+			   tbl_scope);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH0, req,
+			   epoch0_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH1, req,
+			   epoch1_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METADATA, req,
+			   metadata);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_CT_STATE, req,
+			   ct_state);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_PROF, req,
+			   range_prof);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
+			   range_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_LAG_ENTRY, req,
+			   lag_tbl_entries);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_HW_RESC_ALLOC,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
+			    l2_ctx_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, resp,
+			    prof_func);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, resp,
+			    prof_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
+			    em_prof_id);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_REC, resp,
+			    em_record_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
+			    wc_tcam_prof_id);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, resp,
+			    wc_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_PROF, resp,
+			    meter_profiles);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_INST, resp,
+			    meter_inst);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_MIRROR, resp,
+			    mirrors);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_UPAR, resp,
+			    upar);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, resp,
+			    sp_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, resp,
+			    l2_func);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_FKB, resp,
+			    flex_key_templ);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
+			    tbl_scope);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH0, resp,
+			    epoch0_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH1, resp,
+			    epoch1_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METADATA, resp,
+			    metadata);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_CT_STATE, resp,
+			    ct_state);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, resp,
+			    range_prof);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
+			    range_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
+			    lag_tbl_entries);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session HW resource free request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_free(struct tf *tfp,
+			    enum tf_dir dir,
+			    struct tf_rm_entry *hw_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
+			  l2_ctx_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
+			  prof_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
+			  prof_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
+			  em_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
+			  em_record_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
+			  wc_tcam_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
+			  wc_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
+			  meter_profiles);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
+			  meter_inst);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
+			  mirrors);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
+			  upar);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
+			  sp_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
+			  l2_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
+			  flex_key_templ);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
+			  tbl_scope);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
+			  epoch0_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
+			  epoch1_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
+			  metadata);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
+			  ct_state);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
+			  range_prof);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
+			  range_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
+			  lag_tbl_entries);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SESSION_HW_RESC_FREE,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 20ebf2e..da5ccf3 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -30,6 +30,34 @@ int tf_msg_session_open(struct tf *tfp,
 			uint8_t *fw_session_id);
 
 /**
+ * Sends session attach request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] ctrl_channel_name
+ *   Name of the control channel to attach to
+ *
+ * [in] tf_fw_session_id
+ *   fw_session_id assigned to the session at session open time
+ *
+ * Returns:
+ *   0 on success, -1 on failure
+ */
+int tf_msg_session_attach(struct tf *tfp,
+			  char *ctrl_channel_name,
+			  uint8_t tf_fw_session_id);
+
+/**
+ * Sends session close request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns:
+ *   0 on success, negative value on failure
+ */
+int tf_msg_session_close(struct tf *tfp);
+
+/**
  * Sends session query config request to TF Firmware
  */
 int tf_msg_session_qcfg(struct tf *tfp);
@@ -41,4 +69,18 @@ int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
 				 enum tf_dir dir,
 				 struct tf_rm_hw_query *hw_query);
 
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ */
+int tf_msg_session_hw_resc_alloc(struct tf *tfp,
+				 enum tf_dir dir,
+				 struct tf_rm_hw_alloc *hw_alloc,
+				 struct tf_rm_entry *hw_entry);
+
+/**
+ * Sends session HW resource free request to TF Firmware
+ */
+int tf_msg_session_hw_resc_free(struct tf *tfp,
+				enum tf_dir dir,
+				struct tf_rm_entry *hw_entry);
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 160abac..8dbb2f9 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,11 +6,6 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
-/*
- * Hardware specific MAX values
- * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
- */
-
 /** HW Resource types
  */
 enum tf_resource_type_hw {
@@ -43,4 +38,23 @@ enum tf_resource_type_hw {
 	TF_RESC_TYPE_HW_LAG_ENTRY,
 	TF_RESC_TYPE_HW_MAX
 };
+
+/** SRAM Resource types
+ */
+enum tf_resource_type_sram {
+	TF_RESC_TYPE_SRAM_FULL_ACTION,
+	TF_RESC_TYPE_SRAM_MCG,
+	TF_RESC_TYPE_SRAM_ENCAP_8B,
+	TF_RESC_TYPE_SRAM_ENCAP_16B,
+	TF_RESC_TYPE_SRAM_ENCAP_64B,
+	TF_RESC_TYPE_SRAM_SP_SMAC,
+	TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+	TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+	TF_RESC_TYPE_SRAM_COUNTER_64B,
+	TF_RESC_TYPE_SRAM_NAT_SPORT,
+	TF_RESC_TYPE_SRAM_NAT_DPORT,
+	TF_RESC_TYPE_SRAM_NAT_S_IPV4,
+	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
+	TF_RESC_TYPE_SRAM_MAX
+};
 #endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 5164d6b..57ce19b 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -8,10 +8,52 @@
 
 #include "tf_resources.h"
 #include "tf_core.h"
+#include "bitalloc.h"
 
 struct tf;
 struct tf_session;
 
+/* Internal macro to select the appropriate allocation pool based on
+ * the DIRECTION parm; also performs error checking of the DIRECTION
+ * parm. The SESSION_POOL pointer is set appropriately upon successful
+ * return (the SESSION_POOL tracks the resources that have been
+ * allocated to the session).
+ *
+ * parameters:
+ *   struct tf_session *tfs
+ *   enum tf_dir        direction
+ *   struct bitalloc  **session_pool
+ *   string             pool_name - used to form the pointer to the
+ *				     appropriate bit allocation pool;
+ *				     both directions of the session
+ *				     pools must share the same base
+ *				     name. For example, with pool_name
+ *				     set to TF_L2_CTXT_TCAM_POOL_NAME
+ *				     the macro references the session
+ *				     pools TF_L2_CTXT_TCAM_POOL_NAME_RX
+ *				     and TF_L2_CTXT_TCAM_POOL_NAME_TX
+ *
+ *  int                  rc - return code
+ *			      0 - Success
+ *			     -1 - invalid DIRECTION parm
+ */
+#define TF_RM_GET_POOLS(tfs, direction, session_pool, pool_name, rc) do { \
+		(rc) = 0;						\
+		if ((direction) == TF_DIR_RX) {				\
+			*(session_pool) = (tfs)->pool_name ## _RX;	\
+		} else if ((direction) == TF_DIR_TX) {			\
+			*(session_pool) = (tfs)->pool_name ## _TX;	\
+		} else {						\
+			rc = -1;					\
+		}							\
+	} while (0)
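+
+/*
+ * Usage sketch (illustrative only; assumes the session struct declares
+ * the pool members added with the RM infrastructure): select the
+ * session's RX L2 CTXT TCAM pool.
+ *
+ *   struct bitalloc *pool;
+ *   int rc;
+ *
+ *   TF_RM_GET_POOLS(tfs, TF_DIR_RX, &pool,
+ *                   TF_L2_CTXT_TCAM_POOL_NAME, rc);
+ */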
+
+#define TF_RM_GET_POOLS_RX(tfs, session_pool, pool_name)	\
+	(*(session_pool) = (tfs)->pool_name ## _RX)
+
+#define TF_RM_GET_POOLS_TX(tfs, session_pool, pool_name)	\
+	(*(session_pool) = (tfs)->pool_name ## _TX)
+
 /**
  * Resource query single entry
  */
@@ -23,6 +65,16 @@ struct tf_rm_query_entry {
 };
 
 /**
+ * Resource single entry
+ */
+struct tf_rm_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
+
+/**
  * Resource query array of HW entities
  */
 struct tf_rm_hw_query {
@@ -30,4 +82,65 @@ struct tf_rm_hw_query {
 	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
 };
 
+/**
+ * Resource allocation array of HW entities
+ */
+struct tf_rm_hw_alloc {
+	/** array of HW resource entries */
+	uint16_t hw_num[TF_RESC_TYPE_HW_MAX];
+};
+
+/**
+ * Resource query array of SRAM entities
+ */
+struct tf_rm_sram_query {
+	/** array of SRAM resource entries */
+	struct tf_rm_query_entry sram_query[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Resource allocation array of SRAM entities
+ */
+struct tf_rm_sram_alloc {
+	/** array of SRAM resource entries */
+	uint16_t sram_num[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Initializes the Resource Manager and the associated database
+ * entries for HW and SRAM resources. Must be called before any other
+ * Resource Manager functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ */
+void tf_rm_init(struct tf *tfp);
+
+/**
+ * Allocates and validates both HW and SRAM resources per the NVM
+ * configuration. If any allocation fails, all resources for the
+ * session are deallocated.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_allocate_validate(struct tf *tfp);
+
+/**
+ * Closes the Resource Manager and frees all allocated resources per
+ * the associated database.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOTEMPTY) if resources are not cleaned up before close
+ */
+int tf_rm_close(struct tf *tfp);
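+
+/*
+ * Typical lifecycle (see the tf_core.c changes later in this series):
+ * tf_rm_init() is called at session open, tf_rm_allocate_validate()
+ * once the FW session is established, and tf_rm_close() when the last
+ * session reference is released.
+ */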
 #endif /* TF_RM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 32e53c0..651d3ee 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -9,6 +9,7 @@
 #include <stdint.h>
 #include <stdlib.h>
 
+#include "bitalloc.h"
 #include "tf_core.h"
 #include "tf_rm.h"
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 06/34] net/bnxt: add tf core session sram functions
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (4 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 05/34] net/bnxt: add initial tf core session close support Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 07/34] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
                       ` (28 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TruFlow session resource support functionality
- Add TruFlow session hw flush capability as well as
  sram support functions.
- Add resource definitions for session pools.
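
A minimal caller sketch for the new SRAM session messages
(illustrative only; error handling elided and the requested count is
an example):

  struct tf_rm_sram_query query;
  struct tf_rm_sram_alloc alloc = { 0 };
  struct tf_rm_entry entry[TF_RESC_TYPE_SRAM_MAX];

  tf_msg_session_sram_resc_qcaps(tfp, TF_DIR_RX, &query);
  alloc.sram_num[TF_RESC_TYPE_SRAM_FULL_ACTION] = 8;
  tf_msg_session_sram_resc_alloc(tfp, TF_DIR_RX, &alloc, entry);
  /* ... use the reserved SRAM ranges ... */
  tf_msg_session_sram_resc_free(tfp, TF_DIR_RX, entry);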

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile               |   1 +
 drivers/net/bnxt/tf_core/rand.c         |  47 ++++
 drivers/net/bnxt/tf_core/rand.h         |  36 +++
 drivers/net/bnxt/tf_core/tf_core.c      |   1 +
 drivers/net/bnxt/tf_core/tf_msg.c       | 344 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h       |  37 +++
 drivers/net/bnxt/tf_core/tf_resources.h | 482 ++++++++++++++++++++++++++++++++
 7 files changed, 948 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_core/rand.c
 create mode 100644 drivers/net/bnxt/tf_core/rand.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 8474673..c39c098 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -50,6 +50,7 @@ endif
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
 #
diff --git a/drivers/net/bnxt/tf_core/rand.c b/drivers/net/bnxt/tf_core/rand.c
new file mode 100644
index 0000000..32028df
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/rand.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Random Number Functions */
+
+#include <stdio.h>
+#include <stdint.h>
+#include "rand.h"
+
+#define TF_RAND_LFSR_INIT_VALUE 0xACE1u
+
+static uint16_t lfsr = TF_RAND_LFSR_INIT_VALUE;
+static uint32_t bit;
+
+/**
+ * Generates a 16 bit pseudo random number using a Fibonacci LFSR
+ * (taps at bits 16, 14, 13 and 11)
+ *
+ * Returns:
+ *   uint16_t number
+ */
+uint16_t rand16(void)
+{
+	bit = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1;
+	return lfsr = (lfsr >> 1) | (bit << 15);
+}
+
+/**
+ * Generates a 32 bit pseudo random number
+ *
+ * Returns:
+ *   uint32_t number
+ */
+uint32_t rand32(void)
+{
+	/* Draw the high half first; the evaluation order of two
+	 * rand16() calls in a single expression is unspecified in C
+	 */
+	uint32_t hi = rand16();
+
+	return (hi << 16) | rand16();
+}
+
+/**
+ * Resets the seed used by the pseudo random number generator
+ */
+void rand_init(void)
+{
+	lfsr = TF_RAND_LFSR_INIT_VALUE;
+	bit = 0;
+}
diff --git a/drivers/net/bnxt/tf_core/rand.h b/drivers/net/bnxt/tf_core/rand.h
new file mode 100644
index 0000000..31cd76e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/rand.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Random Number Functions */
+#ifndef __RAND_H__
+#define __RAND_H__
+
+/**
+ * Generates a 16 bit pseudo random number
+ *
+ * Returns:
+ * uint16_t number
+ *
+ */
+uint16_t rand16(void);
+
+/**
+ * Generates a 32 bit pseudo random number
+ *
+ * Returns:
+ * uint32_t number
+ *
+ */
+uint32_t rand32(void);
+
+/**
+ * Resets the seed used by the pseudo random number generator
+ *
+ * Returns:
+ *
+ */
+void rand_init(void);
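+
+/*
+ * Usage sketch (illustrative): the generator is deterministic, so
+ * re-seeding reproduces the same sequence.
+ *
+ *   rand_init();
+ *   uint32_t a = rand32();
+ *   rand_init();
+ *   uint32_t b = rand32();   /* b == a */
+ */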
+
+#endif /* __RAND_H__ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 3c5d55d..d82f746 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -12,6 +12,7 @@
 #include "tfp.h"
 #include "bitalloc.h"
 #include "bnxt.h"
+#include "rand.h"
 
 static inline uint32_t SWAP_WORDS32(uint32_t val32)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index e05aea7..4ce2bc5 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -478,3 +478,347 @@ tf_msg_session_hw_resc_free(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+/**
+ * Sends session HW resource flush request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_flush(struct tf *tfp,
+			     enum tf_dir dir,
+			     struct tf_rm_entry *hw_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
+			  l2_ctx_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
+			  prof_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
+			  prof_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
+			  em_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
+			  em_record_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
+			  wc_tcam_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
+			  wc_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
+			  meter_profiles);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
+			  meter_inst);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
+			  mirrors);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
+			  upar);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
+			  sp_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
+			  l2_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
+			  flex_key_templ);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
+			  tbl_scope);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
+			  epoch0_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
+			  epoch1_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
+			  metadata);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
+			  ct_state);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
+			  range_prof);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
+			  range_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
+			  lag_tbl_entries);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 TF_TYPE_TRUFLOW,
+			 HWRM_TFT_SESSION_HW_RESC_FLUSH,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource query capability request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_qcaps(struct tf *tfp,
+			       enum tf_dir dir,
+			       struct tf_rm_sram_query *query)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_qcaps_input req = { 0 };
+	struct tf_session_sram_resc_qcaps_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_SRAM_RESC_QCAPS,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_FULL_ACTION, resp,
+			      full_action);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_MCG, resp,
+			      mcg);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
+			      encap_8b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
+			      encap_16b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
+			      encap_64b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
+			      sp_smac);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, resp,
+			      sp_smac_ipv4);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, resp,
+			      sp_smac_ipv6);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
+			      counter_64b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
+			      nat_sport);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
+			      nat_dport);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
+			      nat_s_ipv4);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
+			      nat_d_ipv4);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource allocation request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_alloc(struct tf *tfp,
+			       enum tf_dir dir,
+			       struct tf_rm_sram_alloc *sram_alloc,
+			       struct tf_rm_entry *sram_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_alloc_input req = { 0 };
+	struct tf_session_sram_resc_alloc_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
+			     full_action);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_MCG, req,
+			     mcg);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
+			     encap_8b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
+			     encap_16b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
+			     encap_64b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC, req,
+			     sp_smac);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+			     req, sp_smac_ipv4);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+			     req, sp_smac_ipv6);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_COUNTER_64B,
+			     req, counter_64b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
+			     nat_sport);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
+			     nat_dport);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
+			     nat_s_ipv4);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
+			     nat_d_ipv4);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_SRAM_RESC_ALLOC,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION,
+			      resp, full_action);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_MCG, resp,
+			      mcg);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
+			      encap_8b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
+			      encap_16b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
+			      encap_64b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
+			      sp_smac);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+			      resp, sp_smac_ipv4);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+			      resp, sp_smac_ipv6);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
+			      counter_64b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
+			      nat_sport);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
+			      nat_dport);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
+			      nat_s_ipv4);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
+			      nat_d_ipv4);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource free request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_free(struct tf *tfp,
+			      enum tf_dir dir,
+			      struct tf_rm_entry *sram_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
+			    full_action);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
+			    mcg);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
+			    encap_8b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
+			    encap_16b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
+			    encap_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
+			    sp_smac);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
+			    sp_smac_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
+			    sp_smac_ipv6);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
+			    counter_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
+			    nat_sport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
+			    nat_dport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
+			    nat_s_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
+			    nat_d_ipv4);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SESSION_SRAM_RESC_FREE,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource flush request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_flush(struct tf *tfp,
+			       enum tf_dir dir,
+			       struct tf_rm_entry *sram_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
+			    full_action);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
+			    mcg);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
+			    encap_8b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
+			    encap_16b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
+			    encap_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
+			    sp_smac);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
+			    sp_smac_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
+			    sp_smac_ipv6);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
+			    counter_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
+			    nat_sport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
+			    nat_dport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
+			    nat_s_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
+			    nat_d_ipv4);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 TF_TYPE_TRUFLOW,
+			 HWRM_TFT_SESSION_SRAM_RESC_FLUSH,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index da5ccf3..057de84 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -83,4 +83,41 @@ int tf_msg_session_hw_resc_alloc(struct tf *tfp,
 int tf_msg_session_hw_resc_free(struct tf *tfp,
 				enum tf_dir dir,
 				struct tf_rm_entry *hw_entry);
+
+/**
+ * Sends session HW resource flush request to TF Firmware
+ */
+int tf_msg_session_hw_resc_flush(struct tf *tfp,
+				 enum tf_dir dir,
+				 struct tf_rm_entry *hw_entry);
+
+/**
+ * Sends session SRAM resource query capability request to TF Firmware
+ */
+int tf_msg_session_sram_resc_qcaps(struct tf *tfp,
+				   enum tf_dir dir,
+				   struct tf_rm_sram_query *sram_query);
+
+/**
+ * Sends session SRAM resource allocation request to TF Firmware
+ */
+int tf_msg_session_sram_resc_alloc(struct tf *tfp,
+				   enum tf_dir dir,
+				   struct tf_rm_sram_alloc *sram_alloc,
+				   struct tf_rm_entry *sram_entry);
+
+/**
+ * Sends session SRAM resource free request to TF Firmware
+ */
+int tf_msg_session_sram_resc_free(struct tf *tfp,
+				  enum tf_dir dir,
+				  struct tf_rm_entry *sram_entry);
+
+/**
+ * Sends session SRAM resource flush request to TF Firmware
+ */
+int tf_msg_session_sram_resc_flush(struct tf *tfp,
+				   enum tf_dir dir,
+				   struct tf_rm_entry *sram_entry);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 8dbb2f9..05e131f 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,6 +6,487 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
+/*
+ * Hardware specific MAX values
+ * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
+ */
+
+/* Common HW resources for all chip variants */
+#define TF_NUM_L2_CTXT_TCAM      1024      /* < Number of L2 context TCAM
+					    * entries
+					    */
+#define TF_NUM_PROF_FUNC          128      /* < Number of prof_func IDs */
+#define TF_NUM_PROF_TCAM         1024      /* < Number of entries in profile
+					    * TCAM
+					    */
+#define TF_NUM_EM_PROF_ID          64      /* < Number of software EM Profile
+					    * IDs
+					    */
+#define TF_NUM_WC_PROF_ID         256      /* < Number of WC profile IDs */
+#define TF_NUM_WC_TCAM_ROW        256      /* < Number of slices per row in
+					    * WC TCAM. A slice is a WC TCAM
+					    * entry.
+					    */
+#define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
+#define TF_NUM_METER             1024      /* < Number of meter instances */
+#define TF_NUM_MIRROR               2      /* < Number of mirror instances */
+#define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
+
+/* Wh+/Brd2 specific HW resources */
+#define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
+					    * entries
+					    */
+
+/* Brd2/Brd4 specific HW resources */
+#define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
+
+
+/* Brd3, Brd4 common HW resources */
+#define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
+					    * templates
+					    */
+
+/* Brd4 specific HW resources */
+#define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
+#define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
+#define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
+#define TF_NUM_METADATA             8      /* < Number of MetaData Profiles */
+#define TF_NUM_CT_STATE            32      /* < Number of Connection Tracking
+					    * States
+					    */
+#define TF_NUM_RANGE_PROF          16      /* < Number of Range Profiles */
+#define TF_NUM_RANGE_ENTRY (64 * 1024)     /* < Number of Range Entries */
+#define TF_NUM_LAG_ENTRY          256      /* < Number of LAG Entries */
+
+/*
+ * Common for the Reserved Resource defines below:
+ *
+ * - HW Resources
+ *   For resources where a priority level plays a role, i.e. L2 ctx
+ *   TCAM entries, both a number of resources and a begin/end pair are
+ *   required. The begin/end is used to ensure TFLIB gets the correct
+ *   priority setting for that resource.
+ *
+ *   For EM records no priority is required, thus a number of
+ *   resources is sufficient.
+ *
+ *   Example, TCAM:
+ *     64 L2 CTXT TCAM entries in a max 1024-entry pool would occupy
+ *     entries 0-63, as HW presents 0 as the highest priority entry.
+ *
+ * - SRAM Resources
+ *   Handled as regular resources as there is no priority required.
+ *
+ * All of these resources are handled per direction, rx/tx.
+ */
+
+/* HW Resources */
+
+/* L2 CTX */
+#define TF_RSVD_L2_CTXT_TCAM_RX                   64
+#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_RX         0
+#define TF_RSVD_L2_CTXT_TCAM_END_IDX_RX           (TF_RSVD_L2_CTXT_TCAM_RX - 1)
+#define TF_RSVD_L2_CTXT_TCAM_TX                   960
+#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_TX         0
+#define TF_RSVD_L2_CTXT_TCAM_END_IDX_TX           (TF_RSVD_L2_CTXT_TCAM_TX - 1)
+
+/* Profiler */
+#define TF_RSVD_PROF_FUNC_RX                      64
+#define TF_RSVD_PROF_FUNC_BEGIN_IDX_RX            64
+#define TF_RSVD_PROF_FUNC_END_IDX_RX              127
+#define TF_RSVD_PROF_FUNC_TX                      64
+#define TF_RSVD_PROF_FUNC_BEGIN_IDX_TX            64
+#define TF_RSVD_PROF_FUNC_END_IDX_TX              127
+
+#define TF_RSVD_PROF_TCAM_RX                      64
+#define TF_RSVD_PROF_TCAM_BEGIN_IDX_RX            960
+#define TF_RSVD_PROF_TCAM_END_IDX_RX              1023
+#define TF_RSVD_PROF_TCAM_TX                      64
+#define TF_RSVD_PROF_TCAM_BEGIN_IDX_TX            960
+#define TF_RSVD_PROF_TCAM_END_IDX_TX              1023
+
+/* EM Profiles IDs */
+#define TF_RSVD_EM_PROF_ID_RX                     64
+#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_RX           0
+#define TF_RSVD_EM_PROF_ID_END_IDX_RX             63  /* Less on CU+ than SR */
+#define TF_RSVD_EM_PROF_ID_TX                     64
+#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_TX           0
+#define TF_RSVD_EM_PROF_ID_END_IDX_TX             63  /* Less on CU+ than SR */
+
+/* EM Records */
+#define TF_RSVD_EM_REC_RX                         16000
+#define TF_RSVD_EM_REC_BEGIN_IDX_RX               0
+#define TF_RSVD_EM_REC_TX                         16000
+#define TF_RSVD_EM_REC_BEGIN_IDX_TX               0
+
+/* Wildcard */
+#define TF_RSVD_WC_TCAM_PROF_ID_RX                128
+#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_RX      128
+#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_RX        255
+#define TF_RSVD_WC_TCAM_PROF_ID_TX                128
+#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_TX      128
+#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_TX        255
+
+#define TF_RSVD_WC_TCAM_RX                        64
+#define TF_RSVD_WC_TCAM_BEGIN_IDX_RX              0
+#define TF_RSVD_WC_TCAM_END_IDX_RX                63
+#define TF_RSVD_WC_TCAM_TX                        64
+#define TF_RSVD_WC_TCAM_BEGIN_IDX_TX              0
+#define TF_RSVD_WC_TCAM_END_IDX_TX                63
+
+#define TF_RSVD_METER_PROF_RX                     0
+#define TF_RSVD_METER_PROF_BEGIN_IDX_RX           0
+#define TF_RSVD_METER_PROF_END_IDX_RX             0
+#define TF_RSVD_METER_PROF_TX                     0
+#define TF_RSVD_METER_PROF_BEGIN_IDX_TX           0
+#define TF_RSVD_METER_PROF_END_IDX_TX             0
+
+#define TF_RSVD_METER_INST_RX                     0
+#define TF_RSVD_METER_INST_BEGIN_IDX_RX           0
+#define TF_RSVD_METER_INST_END_IDX_RX             0
+#define TF_RSVD_METER_INST_TX                     0
+#define TF_RSVD_METER_INST_BEGIN_IDX_TX           0
+#define TF_RSVD_METER_INST_END_IDX_TX             0
+
+/* Mirror */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_MIRROR_RX                         0
+#define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
+#define TF_RSVD_MIRROR_END_IDX_RX                 0
+#define TF_RSVD_MIRROR_TX                         0
+#define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
+#define TF_RSVD_MIRROR_END_IDX_TX                 0
+
+/* UPAR */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_UPAR_RX                           0
+#define TF_RSVD_UPAR_BEGIN_IDX_RX                 0
+#define TF_RSVD_UPAR_END_IDX_RX                   0
+#define TF_RSVD_UPAR_TX                           0
+#define TF_RSVD_UPAR_BEGIN_IDX_TX                 0
+#define TF_RSVD_UPAR_END_IDX_TX                   0
+
+/* Source Properties */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_SP_TCAM_RX                        0
+#define TF_RSVD_SP_TCAM_BEGIN_IDX_RX              0
+#define TF_RSVD_SP_TCAM_END_IDX_RX                0
+#define TF_RSVD_SP_TCAM_TX                        0
+#define TF_RSVD_SP_TCAM_BEGIN_IDX_TX              0
+#define TF_RSVD_SP_TCAM_END_IDX_TX                0
+
+/* L2 Func */
+#define TF_RSVD_L2_FUNC_RX                        0
+#define TF_RSVD_L2_FUNC_BEGIN_IDX_RX              0
+#define TF_RSVD_L2_FUNC_END_IDX_RX                0
+#define TF_RSVD_L2_FUNC_TX                        0
+#define TF_RSVD_L2_FUNC_BEGIN_IDX_TX              0
+#define TF_RSVD_L2_FUNC_END_IDX_TX                0
+
+/* FKB */
+#define TF_RSVD_FKB_RX                            0
+#define TF_RSVD_FKB_BEGIN_IDX_RX                  0
+#define TF_RSVD_FKB_END_IDX_RX                    0
+#define TF_RSVD_FKB_TX                            0
+#define TF_RSVD_FKB_BEGIN_IDX_TX                  0
+#define TF_RSVD_FKB_END_IDX_TX                    0
+
+/* TBL Scope */
+#define TF_RSVD_TBL_SCOPE_RX                      1
+#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_RX            0
+#define TF_RSVD_TBL_SCOPE_END_IDX_RX              1
+#define TF_RSVD_TBL_SCOPE_TX                      1
+#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_TX            0
+#define TF_RSVD_TBL_SCOPE_END_IDX_TX              1
+
+/* EPOCH0 */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_EPOCH0_RX                         0
+#define TF_RSVD_EPOCH0_BEGIN_IDX_RX               0
+#define TF_RSVD_EPOCH0_END_IDX_RX                 0
+#define TF_RSVD_EPOCH0_TX                         0
+#define TF_RSVD_EPOCH0_BEGIN_IDX_TX               0
+#define TF_RSVD_EPOCH0_END_IDX_TX                 0
+
+/* EPOCH1 */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_EPOCH1_RX                         0
+#define TF_RSVD_EPOCH1_BEGIN_IDX_RX               0
+#define TF_RSVD_EPOCH1_END_IDX_RX                 0
+#define TF_RSVD_EPOCH1_TX                         0
+#define TF_RSVD_EPOCH1_BEGIN_IDX_TX               0
+#define TF_RSVD_EPOCH1_END_IDX_TX                 0
+
+/* METADATA */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_METADATA_RX                       0
+#define TF_RSVD_METADATA_BEGIN_IDX_RX             0
+#define TF_RSVD_METADATA_END_IDX_RX               0
+#define TF_RSVD_METADATA_TX                       0
+#define TF_RSVD_METADATA_BEGIN_IDX_TX             0
+#define TF_RSVD_METADATA_END_IDX_TX               0
+
+/* CT_STATE */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_CT_STATE_RX                       0
+#define TF_RSVD_CT_STATE_BEGIN_IDX_RX             0
+#define TF_RSVD_CT_STATE_END_IDX_RX               0
+#define TF_RSVD_CT_STATE_TX                       0
+#define TF_RSVD_CT_STATE_BEGIN_IDX_TX             0
+#define TF_RSVD_CT_STATE_END_IDX_TX               0
+
+/* RANGE_PROF */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_RANGE_PROF_RX                     0
+#define TF_RSVD_RANGE_PROF_BEGIN_IDX_RX           0
+#define TF_RSVD_RANGE_PROF_END_IDX_RX             0
+#define TF_RSVD_RANGE_PROF_TX                     0
+#define TF_RSVD_RANGE_PROF_BEGIN_IDX_TX           0
+#define TF_RSVD_RANGE_PROF_END_IDX_TX             0
+
+/* RANGE_ENTRY */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_RANGE_ENTRY_RX                    0
+#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_RX          0
+#define TF_RSVD_RANGE_ENTRY_END_IDX_RX            0
+#define TF_RSVD_RANGE_ENTRY_TX                    0
+#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_TX          0
+#define TF_RSVD_RANGE_ENTRY_END_IDX_TX            0
+
+/* LAG_ENTRY */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_LAG_ENTRY_RX                      0
+#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_RX            0
+#define TF_RSVD_LAG_ENTRY_END_IDX_RX              0
+#define TF_RSVD_LAG_ENTRY_TX                      0
+#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_TX            0
+#define TF_RSVD_LAG_ENTRY_END_IDX_TX              0
+
+
+/* SRAM - Resources
+ * Limited to the types that CFA provides.
+ */
+#define TF_RSVD_SRAM_FULL_ACTION_RX               8001
+#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX     0
+#define TF_RSVD_SRAM_FULL_ACTION_TX               8001
+#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX     0
+
+/* Not yet supported fully in the infra */
+#define TF_RSVD_SRAM_MCG_RX                       0
+#define TF_RSVD_SRAM_MCG_BEGIN_IDX_RX             0
+/* Multicast Group on TX is not supported */
+#define TF_RSVD_SRAM_MCG_TX                       0
+#define TF_RSVD_SRAM_MCG_BEGIN_IDX_TX             0
+
+/* First encap of 8B RX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_8B_RX                  32
+#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX        0
+/* First encap of 8B TX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_8B_TX                  0
+#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX        0
+
+#define TF_RSVD_SRAM_ENCAP_16B_RX                 16
+#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX       0
+/* First encap of 16B TX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_16B_TX                 20
+#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX       0
+
+/* Encap of 64B on RX is not supported */
+#define TF_RSVD_SRAM_ENCAP_64B_RX                 0
+#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_RX       0
+/* First encap of 64B TX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_64B_TX                 1007
+#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX       0
+
+#define TF_RSVD_SRAM_SP_SMAC_RX                   0
+#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX         0
+#define TF_RSVD_SRAM_SP_SMAC_TX                   0
+#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX         0
+
+/* SRAM SP IPV4 on RX is not supported */
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_RX              0
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_RX    0
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_TX              511
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX    0
+
+/* SRAM SP IPV6 on RX is not supported */
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_RX              0
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_RX    0
+/* Not yet supported fully in infra */
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_TX              0
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX    0
+
+#define TF_RSVD_SRAM_COUNTER_64B_RX               160
+#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX     0
+#define TF_RSVD_SRAM_COUNTER_64B_TX               160
+#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX     0
+
+#define TF_RSVD_SRAM_NAT_SPORT_RX                 0
+#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX       0
+#define TF_RSVD_SRAM_NAT_SPORT_TX                 0
+#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX       0
+
+#define TF_RSVD_SRAM_NAT_DPORT_RX                 0
+#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX       0
+#define TF_RSVD_SRAM_NAT_DPORT_TX                 0
+#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX       0
+
+#define TF_RSVD_SRAM_NAT_S_IPV4_RX                0
+#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX      0
+#define TF_RSVD_SRAM_NAT_S_IPV4_TX                0
+#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX      0
+
+#define TF_RSVD_SRAM_NAT_D_IPV4_RX                0
+#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX      0
+#define TF_RSVD_SRAM_NAT_D_IPV4_TX                0
+#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX      0
+
+/* HW Resource Pool names */
+
+#define TF_L2_CTXT_TCAM_POOL_NAME         l2_ctxt_tcam_pool
+#define TF_L2_CTXT_TCAM_POOL_NAME_RX      l2_ctxt_tcam_pool_rx
+#define TF_L2_CTXT_TCAM_POOL_NAME_TX      l2_ctxt_tcam_pool_tx
+
+#define TF_PROF_FUNC_POOL_NAME            prof_func_pool
+#define TF_PROF_FUNC_POOL_NAME_RX         prof_func_pool_rx
+#define TF_PROF_FUNC_POOL_NAME_TX         prof_func_pool_tx
+
+#define TF_PROF_TCAM_POOL_NAME            prof_tcam_pool
+#define TF_PROF_TCAM_POOL_NAME_RX         prof_tcam_pool_rx
+#define TF_PROF_TCAM_POOL_NAME_TX         prof_tcam_pool_tx
+
+#define TF_EM_PROF_ID_POOL_NAME           em_prof_id_pool
+#define TF_EM_PROF_ID_POOL_NAME_RX        em_prof_id_pool_rx
+#define TF_EM_PROF_ID_POOL_NAME_TX        em_prof_id_pool_tx
+
+#define TF_WC_TCAM_PROF_ID_POOL_NAME      wc_tcam_prof_id_pool
+#define TF_WC_TCAM_PROF_ID_POOL_NAME_RX   wc_tcam_prof_id_pool_rx
+#define TF_WC_TCAM_PROF_ID_POOL_NAME_TX   wc_tcam_prof_id_pool_tx
+
+#define TF_WC_TCAM_POOL_NAME              wc_tcam_pool
+#define TF_WC_TCAM_POOL_NAME_RX           wc_tcam_pool_rx
+#define TF_WC_TCAM_POOL_NAME_TX           wc_tcam_pool_tx
+
+#define TF_METER_PROF_POOL_NAME           meter_prof_pool
+#define TF_METER_PROF_POOL_NAME_RX        meter_prof_pool_rx
+#define TF_METER_PROF_POOL_NAME_TX        meter_prof_pool_tx
+
+#define TF_METER_INST_POOL_NAME           meter_inst_pool
+#define TF_METER_INST_POOL_NAME_RX        meter_inst_pool_rx
+#define TF_METER_INST_POOL_NAME_TX        meter_inst_pool_tx
+
+#define TF_MIRROR_POOL_NAME               mirror_pool
+#define TF_MIRROR_POOL_NAME_RX            mirror_pool_rx
+#define TF_MIRROR_POOL_NAME_TX            mirror_pool_tx
+
+#define TF_UPAR_POOL_NAME                 upar_pool
+#define TF_UPAR_POOL_NAME_RX              upar_pool_rx
+#define TF_UPAR_POOL_NAME_TX              upar_pool_tx
+
+#define TF_SP_TCAM_POOL_NAME              sp_tcam_pool
+#define TF_SP_TCAM_POOL_NAME_RX           sp_tcam_pool_rx
+#define TF_SP_TCAM_POOL_NAME_TX           sp_tcam_pool_tx
+
+#define TF_FKB_POOL_NAME                  fkb_pool
+#define TF_FKB_POOL_NAME_RX               fkb_pool_rx
+#define TF_FKB_POOL_NAME_TX               fkb_pool_tx
+
+#define TF_TBL_SCOPE_POOL_NAME            tbl_scope_pool
+#define TF_TBL_SCOPE_POOL_NAME_RX         tbl_scope_pool_rx
+#define TF_TBL_SCOPE_POOL_NAME_TX         tbl_scope_pool_tx
+
+#define TF_L2_FUNC_POOL_NAME              l2_func_pool
+#define TF_L2_FUNC_POOL_NAME_RX           l2_func_pool_rx
+#define TF_L2_FUNC_POOL_NAME_TX           l2_func_pool_tx
+
+#define TF_EPOCH0_POOL_NAME               epoch0_pool
+#define TF_EPOCH0_POOL_NAME_RX            epoch0_pool_rx
+#define TF_EPOCH0_POOL_NAME_TX            epoch0_pool_tx
+
+#define TF_EPOCH1_POOL_NAME               epoch1_pool
+#define TF_EPOCH1_POOL_NAME_RX            epoch1_pool_rx
+#define TF_EPOCH1_POOL_NAME_TX            epoch1_pool_tx
+
+#define TF_METADATA_POOL_NAME             metadata_pool
+#define TF_METADATA_POOL_NAME_RX          metadata_pool_rx
+#define TF_METADATA_POOL_NAME_TX          metadata_pool_tx
+
+#define TF_CT_STATE_POOL_NAME             ct_state_pool
+#define TF_CT_STATE_POOL_NAME_RX          ct_state_pool_rx
+#define TF_CT_STATE_POOL_NAME_TX          ct_state_pool_tx
+
+#define TF_RANGE_PROF_POOL_NAME           range_prof_pool
+#define TF_RANGE_PROF_POOL_NAME_RX        range_prof_pool_rx
+#define TF_RANGE_PROF_POOL_NAME_TX        range_prof_pool_tx
+
+#define TF_RANGE_ENTRY_POOL_NAME          range_entry_pool
+#define TF_RANGE_ENTRY_POOL_NAME_RX       range_entry_pool_rx
+#define TF_RANGE_ENTRY_POOL_NAME_TX       range_entry_pool_tx
+
+#define TF_LAG_ENTRY_POOL_NAME            lag_entry_pool
+#define TF_LAG_ENTRY_POOL_NAME_RX         lag_entry_pool_rx
+#define TF_LAG_ENTRY_POOL_NAME_TX         lag_entry_pool_tx
+
+/* SRAM Resource Pool names */
+#define TF_SRAM_FULL_ACTION_POOL_NAME     sram_full_action_pool
+#define TF_SRAM_FULL_ACTION_POOL_NAME_RX  sram_full_action_pool_rx
+#define TF_SRAM_FULL_ACTION_POOL_NAME_TX  sram_full_action_pool_tx
+
+#define TF_SRAM_MCG_POOL_NAME             sram_mcg_pool
+#define TF_SRAM_MCG_POOL_NAME_RX          sram_mcg_pool_rx
+#define TF_SRAM_MCG_POOL_NAME_TX          sram_mcg_pool_tx
+
+#define TF_SRAM_ENCAP_8B_POOL_NAME        sram_encap_8b_pool
+#define TF_SRAM_ENCAP_8B_POOL_NAME_RX     sram_encap_8b_pool_rx
+#define TF_SRAM_ENCAP_8B_POOL_NAME_TX     sram_encap_8b_pool_tx
+
+#define TF_SRAM_ENCAP_16B_POOL_NAME       sram_encap_16b_pool
+#define TF_SRAM_ENCAP_16B_POOL_NAME_RX    sram_encap_16b_pool_rx
+#define TF_SRAM_ENCAP_16B_POOL_NAME_TX    sram_encap_16b_pool_tx
+
+#define TF_SRAM_ENCAP_64B_POOL_NAME       sram_encap_64b_pool
+#define TF_SRAM_ENCAP_64B_POOL_NAME_RX    sram_encap_64b_pool_rx
+#define TF_SRAM_ENCAP_64B_POOL_NAME_TX    sram_encap_64b_pool_tx
+
+#define TF_SRAM_SP_SMAC_POOL_NAME         sram_sp_smac_pool
+#define TF_SRAM_SP_SMAC_POOL_NAME_RX      sram_sp_smac_pool_rx
+#define TF_SRAM_SP_SMAC_POOL_NAME_TX      sram_sp_smac_pool_tx
+
+#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME    sram_sp_smac_ipv4_pool
+#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_RX sram_sp_smac_ipv4_pool_rx
+#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX sram_sp_smac_ipv4_pool_tx
+
+#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME    sram_sp_smac_ipv6_pool
+#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_RX sram_sp_smac_ipv6_pool_rx
+#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX sram_sp_smac_ipv6_pool_tx
+
+#define TF_SRAM_STATS_64B_POOL_NAME       sram_stats_64b_pool
+#define TF_SRAM_STATS_64B_POOL_NAME_RX    sram_stats_64b_pool_rx
+#define TF_SRAM_STATS_64B_POOL_NAME_TX    sram_stats_64b_pool_tx
+
+#define TF_SRAM_NAT_SPORT_POOL_NAME       sram_nat_sport_pool
+#define TF_SRAM_NAT_SPORT_POOL_NAME_RX    sram_nat_sport_pool_rx
+#define TF_SRAM_NAT_SPORT_POOL_NAME_TX    sram_nat_sport_pool_tx
+
+#define TF_SRAM_NAT_DPORT_POOL_NAME       sram_nat_dport_pool
+#define TF_SRAM_NAT_DPORT_POOL_NAME_RX    sram_nat_dport_pool_rx
+#define TF_SRAM_NAT_DPORT_POOL_NAME_TX    sram_nat_dport_pool_tx
+
+#define TF_SRAM_NAT_S_IPV4_POOL_NAME      sram_nat_s_ipv4_pool
+#define TF_SRAM_NAT_S_IPV4_POOL_NAME_RX   sram_nat_s_ipv4_pool_rx
+#define TF_SRAM_NAT_S_IPV4_POOL_NAME_TX   sram_nat_s_ipv4_pool_tx
+
+#define TF_SRAM_NAT_D_IPV4_POOL_NAME      sram_nat_d_ipv4_pool
+#define TF_SRAM_NAT_D_IPV4_POOL_NAME_RX   sram_nat_d_ipv4_pool_rx
+#define TF_SRAM_NAT_D_IPV4_POOL_NAME_TX   sram_nat_d_ipv4_pool_tx
+
+/* Sw Resource Pool Names */
+
+#define TF_L2_CTXT_REMAP_POOL_NAME         l2_ctxt_remap_pool
+#define TF_L2_CTXT_REMAP_POOL_NAME_RX      l2_ctxt_remap_pool_rx
+#define TF_L2_CTXT_REMAP_POOL_NAME_TX      l2_ctxt_remap_pool_tx
+
+
 /** HW Resource types
  */
 enum tf_resource_type_hw {
@@ -57,4 +538,5 @@ enum tf_resource_type_sram {
 	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
 	TF_RESC_TYPE_SRAM_MAX
 };
+
 #endif /* _TF_RESOURCES_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 07/34] net/bnxt: add initial tf core resource mgmt support
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (5 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 06/34] net/bnxt: add tf core session sram functions Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 08/34] net/bnxt: add resource manager functionality Venkat Duvvuru
                       ` (27 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add TruFlow public API definitions for resources
  as well as RM infrastructure
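
The RM hooks into session open/close; a sketch of the flow this patch
adds to tf_core.c (illustrative only, error handling elided):

  tf_rm_init(tfp);                   /* seed the RM database           */
  rc = tf_rm_allocate_validate(tfp); /* reserve HW/SRAM per NVM config */
  ...
  rc = tf_rm_close(tfp);             /* release on last session ref    */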

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile             |    1 +
 drivers/net/bnxt/tf_core/tf_core.c    |   40 +
 drivers/net/bnxt/tf_core/tf_core.h    |  125 +++
 drivers/net/bnxt/tf_core/tf_rm.c      | 1731 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_rm.h      |  175 ++++
 drivers/net/bnxt/tf_core/tf_session.h |  206 +++-
 6 files changed, 2277 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index c39c098..02f8c3f 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -51,6 +51,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
 #
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index d82f746..7d76efa 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -20,6 +20,29 @@ static inline uint32_t SWAP_WORDS32(uint32_t val32)
 		((val32 & 0xffff0000) >> 16));
 }
 
+static void tf_seeds_init(struct tf_session *session)
+{
+	int i;
+	uint32_t r;
+
+	/* Initialize the lfsr */
+	rand_init();
+
+	/* RX and TX use the same seed values */
+	session->lkup_lkup3_init_cfg[TF_DIR_RX] =
+		session->lkup_lkup3_init_cfg[TF_DIR_TX] =
+						SWAP_WORDS32(rand32());
+
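+	/* Seed memory layout (per the assignments below): even words
+	 * carry a full 32-bit seed value, the following odd words
+	 * carry a single seed bit (r & 0x1).
+	 */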
+	for (i = 0; i < TF_LKUP_SEED_MEM_SIZE / 2; i++) {
+		r = SWAP_WORDS32(rand32());
+		session->lkup_em_seed_mem[TF_DIR_RX][i * 2] = r;
+		session->lkup_em_seed_mem[TF_DIR_TX][i * 2] = r;
+		r = SWAP_WORDS32(rand32());
+		session->lkup_em_seed_mem[TF_DIR_RX][i * 2 + 1] = (r & 0x1);
+		session->lkup_em_seed_mem[TF_DIR_TX][i * 2 + 1] = (r & 0x1);
+	}
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -109,6 +132,7 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize Session */
 	session->device_type = parms->device_type;
+	tf_rm_init(tfp);
 
 	/* Construct the Session ID */
 	session->session_id.internal.domain = domain;
@@ -125,6 +149,16 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup_close;
 	}
 
+	/* Adjust the Session with what firmware allowed us to get */
+	rc = tf_rm_allocate_validate(tfp);
+	if (rc) {
+		/* Log error */
+		goto cleanup_close;
+	}
+
+	/* Setup hash seeds */
+	tf_seeds_init(session);
+
 	session->ref_count++;
 
 	/* Return session ID */
@@ -195,6 +229,12 @@ tf_close_session(struct tf *tfp)
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	/* Cleanup if we're last user of the session */
+	if (tfs->ref_count == 1) {
+		/* Cleanup any outstanding resources */
+		rc_close = tf_rm_close(tfp);
+	}
+
 	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
 		rc = tf_msg_session_close(tfp);
 		if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 69433ac..3455d8f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -344,4 +344,129 @@ int tf_attach_session(struct tf *tfp,
  */
 int tf_close_session(struct tf *tfp);
 
+/**
+ * @page  ident Identity Management
+ *
+ * @ref tf_alloc_identifier
+ *
+ * @ref tf_free_identifier
+ */
+enum tf_identifier_type {
+	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	 *  and can be used in WC TCAM or EM keys to virtualize further
+	 *  lookups.
+	 */
+	TF_IDENT_TYPE_L2_CTXT,
+	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	 *  to enable virtualization of the profile TCAM.
+	 */
+	TF_IDENT_TYPE_PROF_FUNC,
+	/** The WC profile ID is included in the WC lookup key
+	 *  to enable virtualization of the WC TCAM hardware.
+	 */
+	TF_IDENT_TYPE_WC_PROF,
+	/** The EM profile ID is included in the EM lookup key
+	 *  to enable virtualization of the EM hardware. (not required for Brd4
+	 *  as it has table scope)
+	 */
+	TF_IDENT_TYPE_EM_PROF,
+	/** The L2 func is included in the ILT result and from recycling to
+	 *  enable virtualization of further lookups.
+	 */
+	TF_IDENT_TYPE_L2_FUNC
+};
+
+/**
+ * TCAM table type
+ */
+enum tf_tcam_tbl_type {
+	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	TF_TCAM_TBL_TYPE_PROF_TCAM,
+	TF_TCAM_TBL_TYPE_WC_TCAM,
+	TF_TCAM_TBL_TYPE_SP_TCAM,
+	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
+	TF_TCAM_TBL_TYPE_VEB_TCAM,
+	TF_TCAM_TBL_TYPE_MAX
+};
+
+/**
+ * Enumeration of TruFlow table types. A table type is used to identify a
+ * resource object.
+ *
+ * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
+ * the only table type that is connected with a table scope.
+ */
+enum tf_tbl_type {
+	/** Wh+/Brd2 Action Record */
+	TF_TBL_TYPE_FULL_ACT_RECORD,
+	/** Multicast Groups */
+	TF_TBL_TYPE_MCAST_GROUPS,
+	/** Action Encap 8 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_8B,
+	/** Action Encap 16 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_16B,
+	/** Action Encap 32 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_32B,
+	/** Action Encap 64 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_64B,
+	/** Action Source Properties SMAC */
+	TF_TBL_TYPE_ACT_SP_SMAC,
+	/** Action Source Properties SMAC IPv4 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	/** Action Source Properties SMAC IPv6 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
+	/** Action Statistics 64 Bits */
+	TF_TBL_TYPE_ACT_STATS_64,
+	/** Action Modify L4 Src Port */
+	TF_TBL_TYPE_ACT_MODIFY_SPORT,
+	/** Action Modify L4 Dest Port */
+	TF_TBL_TYPE_ACT_MODIFY_DPORT,
+	/** Action Modify IPv4 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
+	/** Action Modify IPv4 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
+	/** Action Modify IPv6 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
+	/** Action Modify IPv6 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
+
+	/* HW */
+
+	/** Meter Profiles */
+	TF_TBL_TYPE_METER_PROF,
+	/** Meter Instance */
+	TF_TBL_TYPE_METER_INST,
+	/** Mirror Config */
+	TF_TBL_TYPE_MIRROR_CONFIG,
+	/** UPAR */
+	TF_TBL_TYPE_UPAR,
+	/** Brd4 Epoch 0 table */
+	TF_TBL_TYPE_EPOCH0,
+	/** Brd4 Epoch 1 table  */
+	TF_TBL_TYPE_EPOCH1,
+	/** Brd4 Metadata  */
+	TF_TBL_TYPE_METADATA,
+	/** Brd4 CT State  */
+	TF_TBL_TYPE_CT_STATE,
+	/** Brd4 Range Profile  */
+	TF_TBL_TYPE_RANGE_PROF,
+	/** Brd4 Range Entry  */
+	TF_TBL_TYPE_RANGE_ENTRY,
+	/** Brd4 LAG Entry  */
+	TF_TBL_TYPE_LAG,
+	/** Brd4 only VNIC/SVIF Table */
+	TF_TBL_TYPE_VNIC_SVIF,
+
+	/* External */
+
+	/** External table type - initially one pool of poolsize
+	 * entries. All External table types are associated with a
+	 * table scope. Internal types are not.
+	 */
+	TF_TBL_TYPE_EXT,
+	/** Future - external pool of size0 entries */
+	TF_TBL_TYPE_EXT_0,
+	TF_TBL_TYPE_MAX
+};
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
new file mode 100644
index 0000000..56767e7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -0,0 +1,1731 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+
+#include "tf_rm.h"
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tf_resources.h"
+#include "tf_msg.h"
+#include "bnxt.h"
+
+/**
+ * Internal macro to perform HW resource allocation check between what
+ * firmware reports vs what was statically requested.
+ *
+ * Parameters:
+ *   struct tf_rm_hw_query    *hquery      - Pointer to the hw query result
+ *   enum tf_dir               dir         - Direction to process
+ *   enum tf_resource_type_hw  hcapi_type  - HCAPI type, the index element
+ *                                           in the hw query structure
+ *   define                    def_value   - Define value to check against
+ *   uint32_t                 *eflag       - Result of the check
+ */
+#define TF_RM_CHECK_HW_ALLOC(hquery, dir, hcapi_type, def_value, eflag) do {  \
+	if ((dir) == TF_DIR_RX) {					      \
+		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _RX) \
+			*(eflag) |= 1 << (hcapi_type);			      \
+	} else {							      \
+		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _TX) \
+			*(eflag) |= 1 << (hcapi_type);			      \
+	}								      \
+} while (0)
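+
+/*
+ * Illustrative expansion (reference only): for example,
+ * TF_RM_CHECK_HW_ALLOC(q, TF_DIR_RX, TF_RESC_TYPE_HW_RANGE_ENTRY,
+ * TF_RSVD_RANGE_ENTRY, eflag) compares
+ * q->hw_query[TF_RESC_TYPE_HW_RANGE_ENTRY].max against
+ * TF_RSVD_RANGE_ENTRY_RX and, on mismatch, sets bit
+ * TF_RESC_TYPE_HW_RANGE_ENTRY in *eflag.
+ */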
+
+/**
+ * Internal macro to perform SRAM resource allocation check between
+ * what firmware reports vs what was statically requested.
+ *
+ * Parameters:
+ *   struct tf_rm_sram_query   *squery      - Pointer to the sram query result
+ *   enum tf_dir                dir         - Direction to process
+ *   enum tf_resource_type_sram hcapi_type  - HCAPI type, the index element
+ *                                            in the sram query structure
+ *   define                     def_value   - Define value to check against
+ *   uint32_t                  *eflag       - Result of the check
+ */
+#define TF_RM_CHECK_SRAM_ALLOC(squery, dir, hcapi_type, def_value, eflag) do { \
+	if ((dir) == TF_DIR_RX) {					       \
+		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _RX)\
+			*(eflag) |= 1 << (hcapi_type);			       \
+	} else {							       \
+		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _TX)\
+			*(eflag) |= 1 << (hcapi_type);			       \
+	}								       \
+} while (0)
+
+/**
+ * Internal macro to convert a reserved resource define name to be
+ * direction specific.
+ *
+ * Parameters:
+ *   enum tf_dir    dir         - Direction to process
+ *   string         type        - Type name to append RX or TX to
+ *   string         dtype       - Direction specific type
+ */
+#define TF_RESC_RSVD(dir, type, dtype) do {	\
+		if ((dir) == TF_DIR_RX)		\
+			(dtype) = type ## _RX;	\
+		else				\
+			(dtype) = type ## _TX;	\
+	} while (0)
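+
+/*
+ * Illustrative expansion (reference only): TF_RESC_RSVD(TF_DIR_TX,
+ * TF_RSVD_SRAM_MCG, val) pastes the direction suffix and assigns
+ * TF_RSVD_SRAM_MCG_TX to val.
+ */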
+
+const char
+*tf_dir_2_str(enum tf_dir dir)
+{
+	switch (dir) {
+	case TF_DIR_RX:
+		return "RX";
+	case TF_DIR_TX:
+		return "TX";
+	default:
+		return "Invalid direction";
+	}
+}
+
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type)
+{
+	switch (id_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		return "l2_ctxt_remap";
+	case TF_IDENT_TYPE_PROF_FUNC:
+		return "prof_func";
+	case TF_IDENT_TYPE_WC_PROF:
+		return "wc_prof";
+	case TF_IDENT_TYPE_EM_PROF:
+		return "em_prof";
+	case TF_IDENT_TYPE_L2_FUNC:
+		return "l2_func";
+	default:
+		break;
+	}
+	return "Invalid identifier";
+}
+
+const char
+*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type)
+{
+	switch (sram_type) {
+	case TF_RESC_TYPE_SRAM_FULL_ACTION:
+		return "Full action";
+	case TF_RESC_TYPE_SRAM_MCG:
+		return "MCG";
+	case TF_RESC_TYPE_SRAM_ENCAP_8B:
+		return "Encap 8B";
+	case TF_RESC_TYPE_SRAM_ENCAP_16B:
+		return "Encap 16B";
+	case TF_RESC_TYPE_SRAM_ENCAP_64B:
+		return "Encap 64B";
+	case TF_RESC_TYPE_SRAM_SP_SMAC:
+		return "Source properties SMAC";
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
+		return "Source properties SMAC IPv4";
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
+		return "Source properties SMAC IPv6";
+	case TF_RESC_TYPE_SRAM_COUNTER_64B:
+		return "Counter 64B";
+	case TF_RESC_TYPE_SRAM_NAT_SPORT:
+		return "NAT source port";
+	case TF_RESC_TYPE_SRAM_NAT_DPORT:
+		return "NAT destination port";
+	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
+		return "NAT source IPv4";
+	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
+		return "NAT destination IPv4";
+	default:
+		return "Invalid identifier";
+	}
+}
+
+/**
+ * Helper function to perform a SRAM HCAPI resource type lookup
+ * against the reserved value of the same static type.
+ *
+ * Returns:
+ *   -EOPNOTSUPP - Reserved resource type not supported
+ *   Value       - Integer value of the reserved value for the requested type
+ */
+static int
+tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
+{
+	int value = -EOPNOTSUPP;
+
+	switch (index) {
+	case TF_RESC_TYPE_SRAM_FULL_ACTION:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_FULL_ACTION, value);
+		break;
+	case TF_RESC_TYPE_SRAM_MCG:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_MCG, value);
+		break;
+	case TF_RESC_TYPE_SRAM_ENCAP_8B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_8B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_ENCAP_16B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_16B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_ENCAP_64B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_64B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_SP_SMAC:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC, value);
+		break;
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV4, value);
+		break;
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV6, value);
+		break;
+	case TF_RESC_TYPE_SRAM_COUNTER_64B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_COUNTER_64B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_SPORT:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_SPORT, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_DPORT:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_DPORT, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_S_IPV4, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_D_IPV4, value);
+		break;
+	default:
+		break;
+	}
+
+	return value;
+}
+
+/**
+ * Helper function to print all the SRAM resource qcaps errors
+ * reported in the error_flag.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] sram_query
+ *   Pointer to the SRAM query result holding the available counts
+ *
+ * [in] error_flag
+ *   Pointer to the sram error flags created at time of the query check
+ */
+static void
+tf_rm_print_sram_qcaps_error(enum tf_dir dir,
+			     struct tf_rm_sram_query *sram_query,
+			     uint32_t *error_flag)
+{
+	int i;
+
+	PMD_DRV_LOG(ERR, "QCAPS errors SRAM\n");
+	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	PMD_DRV_LOG(ERR, "  Elements:\n");
+
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
+		if (*error_flag & 1 << i)
+			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+				    tf_hcapi_sram_2_str(i),
+				    sram_query->sram_query[i].max,
+				    tf_rm_rsvd_sram_value(dir, i));
+	}
+}
+
+/**
+ * Performs a HW resource check between what firmware capability
+ * reports and what the core expects is available.
+ *
+ * Firmware performs the resource carving at AFM init time and the
+ * resource capability is reported in the TruFlow qcaps msg.
+ *
+ * [in] query
+ *   Pointer to HW Query data structure. Query holds what the firmware
+ *   offers for the HW resources.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in/out] error_flag
+ *   Pointer to a bit array indicating the error of a single HCAPI
+ *   resource type. When a bit is set to 1, the HCAPI resource type
+ *   failed static allocation.
+ *
+ * Returns:
+ *  0       - Success
+ *  -ENOMEM - Failure on one of the allocated resources. Check the
+ *            error_flag for what types are flagged errored.
+ */
+static int
+tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
+			    enum tf_dir dir,
+			    uint32_t *error_flag)
+{
+	*error_flag = 0;
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_RANGE_ENTRY,
+			     TF_RSVD_RANGE_ENTRY,
+			     error_flag);
+
+	if (*error_flag != 0)
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * Performs a SRAM resource check between what firmware capability
+ * reports and what the core expects is available.
+ *
+ * Firmware performs the resource carving at AFM init time and the
+ * resource capability is reported in the TruFlow qcaps msg.
+ *
+ * [in] query
+ *   Pointer to SRAM Query data structure. Query holds what the
+ *   firmware offers for the SRAM resources.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in/out] error_flag
+ *   Pointer to a bit array indicating the error of a single HCAPI
+ *   resource type. When a bit is set to 1, the HCAPI resource type
+ *   failed static allocation.
+ *
+ * Returns:
+ *  0       - Success
+ *  -ENOMEM - Failure on one of the allocated resources. Check the
+ *            error_flag for what types are flagged errored.
+ */
+static int
+tf_rm_check_sram_qcaps_static(struct tf_rm_sram_query *query,
+			      enum tf_dir dir,
+			      uint32_t *error_flag)
+{
+	*error_flag = 0;
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_FULL_ACTION,
+			       TF_RSVD_SRAM_FULL_ACTION,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_MCG,
+			       TF_RSVD_SRAM_MCG,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_ENCAP_8B,
+			       TF_RSVD_SRAM_ENCAP_8B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_ENCAP_16B,
+			       TF_RSVD_SRAM_ENCAP_16B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_ENCAP_64B,
+			       TF_RSVD_SRAM_ENCAP_64B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_SP_SMAC,
+			       TF_RSVD_SRAM_SP_SMAC,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+			       TF_RSVD_SRAM_SP_SMAC_IPV4,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+			       TF_RSVD_SRAM_SP_SMAC_IPV6,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_COUNTER_64B,
+			       TF_RSVD_SRAM_COUNTER_64B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_SPORT,
+			       TF_RSVD_SRAM_NAT_SPORT,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_DPORT,
+			       TF_RSVD_SRAM_NAT_DPORT,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_S_IPV4,
+			       TF_RSVD_SRAM_NAT_S_IPV4,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_D_IPV4,
+			       TF_RSVD_SRAM_NAT_D_IPV4,
+			       error_flag);
+
+	if (*error_flag != 0)
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * Internal function to mark pool entries used.
+ */
+static void
+tf_rm_reserve_range(uint32_t count,
+		    uint32_t rsv_begin,
+		    uint32_t rsv_end,
+		    uint32_t max,
+		    struct bitalloc *pool)
+{
+	uint32_t i;
+
+	/* If no resources have been requested, mark everything
+	 * 'used'
+	 */
+	if (count == 0)	{
+		for (i = 0; i < max; i++)
+			ba_alloc_index(pool, i);
+	} else {
+		/* Support 2 main modes
+		 * Reserved range starts from bottom up (with
+		 * pre-reserved value or not)
+		 * - begin = 0 to end xx
+		 * - begin = 1 to end xx
+		 *
+		 * Reserved range starts from top down
+		 * - begin = yy to end max
+		 */
+
+		/* Bottom up check, start from 0 */
+		if (rsv_begin == 0) {
+			for (i = rsv_end + 1; i < max; i++)
+				ba_alloc_index(pool, i);
+		}
+
+		/* Bottom up check, start from 1 or higher OR
+		 * Top Down
+		 */
+		if (rsv_begin >= 1) {
+			/* Allocate from 0 until start */
+			for (i = 0; i < rsv_begin; i++)
+				ba_alloc_index(pool, i);
+
+			/* Skip the reserved range, then do the remaining */
+			if (rsv_end < max - 1) {
+				for (i = rsv_end + 1; i < max; i++)
+					ba_alloc_index(pool, i);
+			}
+		}
+	}
+}
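+
+/*
+ * Example (illustrative): tf_rm_reserve_range(5, 10, 14, 32, pool)
+ * marks indices 0-9 and 15-31 as used, leaving the reserved range
+ * 10-14 available for TruFlow allocations.
+ */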
+
+/**
+ * Internal function to mark all the l2 ctxt allocated that Truflow
+ * does not own.
+ */
+static void
+tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
+	uint32_t end = 0;
+
+	/* l2 ctxt rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
+
+	/* l2 ctxt tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the l2 func resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_l2_func(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
+	uint32_t end = 0;
+
+	/* l2 func rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_RX);
+
+	/* l2 func tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the full action resources allocated
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_full_action(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_FULL_ACTION;
+	uint16_t end = 0;
+
+	/* full action rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_FULL_ACTION_RX,
+			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX);
+
+	/* full action tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_FULL_ACTION_TX,
+			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the multicast group resources
+ * allocated that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_mcg(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_MCG;
+	uint16_t end = 0;
+
+	/* multicast group rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_MCG_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_MCG_RX,
+			    tfs->TF_SRAM_MCG_POOL_NAME_RX);
+
+	/* Multicast Group on TX is not supported */
+}
+
+/**
+ * Internal function to mark all the encap resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_encap(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_ENCAP_8B;
+	uint16_t end = 0;
+
+	/* encap 8b rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_8B_RX,
+			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX);
+
+	/* encap 8b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_8B_TX,
+			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_ENCAP_16B;
+
+	/* encap 16b rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_16B_RX,
+			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX);
+
+	/* encap 16b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_16B_TX,
+			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_ENCAP_64B;
+
+	/* Encap 64B not supported on RX */
+
+	/* Encap 64b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_64B_TX,
+			    tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the sp resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_sp(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_SP_SMAC;
+	uint16_t end = 0;
+
+	/* sp smac rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_RX,
+			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX);
+
+	/* sp smac tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_TX,
+			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
+
+	/* SP SMAC IPv4 not supported on RX */
+
+	/* sp smac ipv4 tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_IPV4_TX,
+			    tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
+
+	/* SP SMAC IPv6 not supported on RX */
+
+	/* sp smac ipv6 tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_IPV6_TX,
+			    tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the stat resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_stats(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_COUNTER_64B;
+	uint16_t end = 0;
+
+	/* counter 64b rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_COUNTER_64B_RX,
+			    tfs->TF_SRAM_STATS_64B_POOL_NAME_RX);
+
+	/* counter 64b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_COUNTER_64B_TX,
+			    tfs->TF_SRAM_STATS_64B_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the nat resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_nat(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_NAT_SPORT;
+	uint16_t end = 0;
+
+	/* nat source port rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_SPORT_RX,
+			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX);
+
+	/* nat source port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_SPORT_TX,
+			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_NAT_DPORT;
+
+	/* nat destination port rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_DPORT_RX,
+			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX);
+
+	/* nat destination port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_DPORT_TX,
+			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
+
+	/* nat source port ipv4 rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_S_IPV4_RX,
+			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX);
+
+	/* nat source ipv4 port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_S_IPV4_TX,
+			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
+
+	/* nat destination port ipv4 rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_D_IPV4_RX,
+			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX);
+
+	/* nat destination ipv4 port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_D_IPV4_TX,
+			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX);
+}
+
+/**
+ * Internal function used to validate the HW allocated resources
+ * against the requested values.
+ */
+static int
+tf_rm_hw_alloc_validate(enum tf_dir dir,
+			struct tf_rm_hw_alloc *hw_alloc,
+			struct tf_rm_entry *hw_entry)
+{
+	int error = 0;
+	int i;
+
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
+		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
+			PMD_DRV_LOG(ERR,
+				"%s, Alloc failed id:%d expect:%d got:%d\n",
+				tf_dir_2_str(dir),
+				i,
+				hw_alloc->hw_num[i],
+				hw_entry[i].stride);
+			error = -1;
+		}
+	}
+
+	return error;
+}
+
+/**
+ * Internal function used to validate the SRAM allocated resources
+ * against the requested values.
+ */
+static int
+tf_rm_sram_alloc_validate(enum tf_dir dir,
+			  struct tf_rm_sram_alloc *sram_alloc,
+			  struct tf_rm_entry *sram_entry)
+{
+	int error = 0;
+	int i;
+
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
+		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
+			PMD_DRV_LOG(ERR,
+				"%s, Alloc failed idx:%d expect:%d got:%d\n",
+				tf_dir_2_str(dir),
+				i,
+				sram_alloc->sram_num[i],
+				sram_entry[i].stride);
+			error = -1;
+		}
+	}
+
+	return error;
+}
+
+/**
+ * Internal function used to mark all the HW resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_reserve_hw(struct tf *tfp)
+{
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* TBD
+	 * There is no direct AFM resource allocation as it is carved
+	 * statically at AFM boot time. Thus the bit allocators work
+	 * on the full HW resource amount and we just mark everything
+	 * used except the resources that Truflow took ownership of.
+	 */
+	tf_rm_rsvd_l2_ctxt(tfs);
+	tf_rm_rsvd_l2_func(tfs);
+}
+
+/**
+ * Internal function used to mark all the SRAM resources allocated
+ * that Truflow does not own.
+ */
+static void
+tf_rm_reserve_sram(struct tf *tfp)
+{
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* TBD
+	 * There is no direct AFM resource allocation as it is carved
+	 * statically at AFM boot time. Thus the bit allocators work
+	 * on the full HW resource amount and we just mark everything
+	 * used except the resources that Truflow took ownership of.
+	 */
+	tf_rm_rsvd_sram_full_action(tfs);
+	tf_rm_rsvd_sram_mcg(tfs);
+	tf_rm_rsvd_sram_encap(tfs);
+	tf_rm_rsvd_sram_sp(tfs);
+	tf_rm_rsvd_sram_stats(tfs);
+	tf_rm_rsvd_sram_nat(tfs);
+}
+
+/**
+ * Internal function used to allocate and validate all HW resources.
+ */
+static int
+tf_rm_allocate_validate_hw(struct tf *tfp,
+			   enum tf_dir dir)
+{
+	int rc;
+	int i;
+	struct tf_rm_hw_query hw_query;
+	struct tf_rm_hw_alloc hw_alloc;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_rm_entry *hw_entries;
+	uint32_t error_flag;
+
+	if (dir == TF_DIR_RX)
+		hw_entries = tfs->resc.rx.hw_entry;
+	else
+		hw_entries = tfs->resc.tx.hw_entry;
+
+	/* Query for Session HW Resources */
+	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, HW qcaps message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
+			tf_dir_2_str(dir),
+			error_flag);
+		goto cleanup;
+	}
+
+	/* Post process HW capability */
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++)
+		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
+
+	/* Allocate Session HW Resources */
+	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, HW alloc message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	/* Perform HW allocation validation as it's possible the
+	 * resource availability changed between qcaps and alloc
+	 */
+	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, HW Resource validation failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	return 0;
+
+ cleanup:
+	return -1;
+}
+
+/**
+ * Internal function used to allocate and validate all SRAM resources.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0  - Success
+ *   -1 - Internal error
+ */
+static int
+tf_rm_allocate_validate_sram(struct tf *tfp,
+			     enum tf_dir dir)
+{
+	int rc;
+	int i;
+	struct tf_rm_sram_query sram_query;
+	struct tf_rm_sram_alloc sram_alloc;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_rm_entry *sram_entries;
+	uint32_t error_flag;
+
+	if (dir == TF_DIR_RX)
+		sram_entries = tfs->resc.rx.sram_entry;
+	else
+		sram_entries = tfs->resc.tx.sram_entry;
+
+	/* Query for Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+			tf_dir_2_str(dir),
+			error_flag);
+		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
+		goto cleanup;
+	}
+
+	/* Post process SRAM capability */
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
+		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+
+	/* Allocate Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_alloc(tfp,
+					    dir,
+					    &sram_alloc,
+					    sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	/* Perform SRAM allocation validation as it's possible the
+	 * resource availability changed between qcaps and alloc
+	 */
+	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	return 0;
+
+ cleanup:
+	return -1;
+}
+
+/**
+ * Helper function used to prune a SRAM resource array to only hold
+ * elements that need to be flushed.
+ *
+ * [in] tfs
+ *   Session handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] sram_entries
+ *   Master SRAM resource database
+ *
+ * [in/out] flush_entries
+ *   Pruned SRAM resource database of entries to be flushed. This
+ *   array should be passed in as a complete copy of the master SRAM
+ *   resource database. The outgoing result will be a pruned version
+ *   based on the result of the check.
+ *
+ * Returns:
+ *    0 - Success, no flush required
+ *    1 - Success, flush required
+ *   -1 - Internal error
+ */
+static int
+tf_rm_sram_to_flush(struct tf_session *tfs,
+		    enum tf_dir dir,
+		    struct tf_rm_entry *sram_entries,
+		    struct tf_rm_entry *flush_entries)
+{
+	int rc;
+	int flush_rc = 0;
+	int free_cnt;
+	struct bitalloc *pool;
+
+	/* Check all the sram resource pools and check for left over
+	 * elements. Any found will result in the complete pool of a
+	 * type to get invalidated.
+	 */
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_FULL_ACTION_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	/* Only pools for RX direction */
+	if (dir == TF_DIR_RX) {
+		TF_RM_GET_POOLS_RX(tfs, &pool,
+				   TF_SRAM_MCG_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_MCG].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune TX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_ENCAP_8B_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_ENCAP_16B_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	/* Only pools for TX direction */
+	if (dir == TF_DIR_TX) {
+		TF_RM_GET_POOLS_TX(tfs, &pool,
+				   TF_SRAM_ENCAP_64B_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt ==
+		    sram_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune RX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_SP_SMAC_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	/* Only pools for TX direction */
+	if (dir == TF_DIR_TX) {
+		TF_RM_GET_POOLS_TX(tfs, &pool,
+				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt ==
+		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride =
+				0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune RX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride = 0;
+	}
+
+	/* Only pools for TX direction */
+	if (dir == TF_DIR_TX) {
+		TF_RM_GET_POOLS_TX(tfs, &pool,
+				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt ==
+		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride =
+				0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune RX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride = 0;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_STATS_64B_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_SPORT_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_DPORT_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_S_IPV4_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_D_IPV4_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	return flush_rc;
+}
+
+/**
+ * Helper function used to generate an error log for the SRAM types
+ * that need to be flushed. The types should have been cleaned up
+ * ahead of invoking tf_close_session.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] sram_entries
+ *   SRAM resource database holding elements to be flushed
+ */
+static void
+tf_rm_log_sram_flush(enum tf_dir dir,
+		     struct tf_rm_entry *sram_entries)
+{
+	int i;
+
+	/* Walk the sram flush array and log the types that weren't
+	 * cleaned up.
+	 */
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
+		if (sram_entries[i].stride != 0)
+			PMD_DRV_LOG(ERR,
+				    "%s: %s was not cleaned up\n",
+				    tf_dir_2_str(dir),
+				    tf_hcapi_sram_2_str(i));
+	}
+}
+
+void
+tf_rm_init(struct tf *tfp)
+{
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	/* This version is host specific and should be checked against
+	 * when attaching as there is no guarantee that a secondary
+	 * would run from same image version.
+	 */
+	tfs->ver.major = TF_SESSION_VER_MAJOR;
+	tfs->ver.minor = TF_SESSION_VER_MINOR;
+	tfs->ver.update = TF_SESSION_VER_UPDATE;
+
+	tfs->session_id.id = 0;
+	tfs->ref_count = 0;
+
+	/* Initialization of Table Scopes */
+	/* ll_init(&tfs->tbl_scope_ll); */
+
+	/* Initialization of HW and SRAM resource DB */
+	memset(&tfs->resc, 0, sizeof(struct tf_rm_db));
+
+	/* Initialization of HW Resource Pools */
+	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+
+	/* Initialization of SRAM Resource Pools
+	 * These pools are set to the TFLIB defined MAX sizes not
+	 * AFM's HW max as to limit the memory consumption
+	 */
+	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX,
+		TF_RSVD_SRAM_FULL_ACTION_RX);
+	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX,
+		TF_RSVD_SRAM_FULL_ACTION_TX);
+	/* Only Multicast Group on RX is supported */
+	ba_init(tfs->TF_SRAM_MCG_POOL_NAME_RX,
+		TF_RSVD_SRAM_MCG_RX);
+	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX,
+		TF_RSVD_SRAM_ENCAP_8B_RX);
+	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX,
+		TF_RSVD_SRAM_ENCAP_8B_TX);
+	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX,
+		TF_RSVD_SRAM_ENCAP_16B_RX);
+	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX,
+		TF_RSVD_SRAM_ENCAP_16B_TX);
+	/* Only Encap 64B on TX is supported */
+	ba_init(tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX,
+		TF_RSVD_SRAM_ENCAP_64B_TX);
+	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX,
+		TF_RSVD_SRAM_SP_SMAC_RX);
+	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX,
+		TF_RSVD_SRAM_SP_SMAC_TX);
+	/* Only SP SMAC IPv4 on TX is supported */
+	ba_init(tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
+		TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
+	/* Only SP SMAC IPv6 on TX is supported */
+	ba_init(tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
+		TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
+	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_RX,
+		TF_RSVD_SRAM_COUNTER_64B_RX);
+	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_TX,
+		TF_RSVD_SRAM_COUNTER_64B_TX);
+	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_SPORT_RX);
+	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_SPORT_TX);
+	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_DPORT_RX);
+	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_DPORT_TX);
+	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_S_IPV4_RX);
+	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_S_IPV4_TX);
+	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_D_IPV4_RX);
+	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_D_IPV4_TX);
+
+	/* Initialization of pools local to TF Core */
+	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+}
+
+int
+tf_rm_allocate_validate(struct tf *tfp)
+{
+	int rc;
+	int i;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = tf_rm_allocate_validate_hw(tfp, i);
+		if (rc)
+			return rc;
+		rc = tf_rm_allocate_validate_sram(tfp, i);
+		if (rc)
+			return rc;
+	}
+
+	/* With both HW and SRAM allocated and validated we can
+	 * 'scrub' the reservation on the pools.
+	 */
+	tf_rm_reserve_hw(tfp);
+	tf_rm_reserve_sram(tfp);
+
+	return rc;
+}
+
+int
+tf_rm_close(struct tf *tfp)
+{
+	int rc;
+	int rc_close = 0;
+	int i;
+	struct tf_rm_entry *hw_entries;
+	struct tf_rm_entry *sram_entries;
+	struct tf_rm_entry *sram_flush_entries;
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	struct tf_rm_db flush_resc = tfs->resc;
+
+	/* On close it is assumed that the session has already cleaned
+	 * up all its resources, individually, while destroying its
+	 * flows. No checking is performed thus the behavior is as
+	 * follows.
+	 *
+	 * Session RM will signal FW to release session resources. FW
+	 * will perform invalidation of all the allocated entries
+	 * (assures any outstanding resources have been cleared), then
+	 * free the FW RM instance.
+	 *
+	 * Session will then be freed by tf_close_session() thus there
+	 * is no need to clean each resource pool as the whole session
+	 * is going away.
+	 */
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		if (i == TF_DIR_RX) {
+			hw_entries = tfs->resc.rx.hw_entry;
+			sram_entries = tfs->resc.rx.sram_entry;
+			sram_flush_entries = flush_resc.rx.sram_entry;
+		} else {
+			hw_entries = tfs->resc.tx.hw_entry;
+			sram_entries = tfs->resc.tx.sram_entry;
+			sram_flush_entries = flush_resc.tx.sram_entry;
+		}
+
+		/* Check for any not previously freed SRAM resources
+		 * and flush if required.
+		 */
+		rc = tf_rm_sram_to_flush(tfs,
+					 i,
+					 sram_entries,
+					 sram_flush_entries);
+		if (rc) {
+			rc_close = -ENOTEMPTY;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, lingering SRAM resources\n",
+				    tf_dir_2_str(i));
+
+			/* Log the entries to be flushed */
+			tf_rm_log_sram_flush(i, sram_flush_entries);
+
+			rc = tf_msg_session_sram_resc_flush(tfp,
+							    i,
+							    sram_flush_entries);
+			if (rc) {
+				rc_close = rc;
+				/* Log error */
+				PMD_DRV_LOG(ERR,
+					    "%s, SRAM flush failed\n",
+					    tf_dir_2_str(i));
+			}
+		}
+
+		rc = tf_msg_session_hw_resc_free(tfp, i, hw_entries);
+		if (rc) {
+			rc_close = rc;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, HW free failed\n",
+				    tf_dir_2_str(i));
+		}
+
+		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
+		if (rc) {
+			rc_close = rc;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, SRAM free failed\n",
+				    tf_dir_2_str(i));
+		}
+	}
+
+	return rc_close;
+}
+
+int
+tf_rm_convert_tbl_type(enum tf_tbl_type type,
+		       uint32_t *hcapi_type)
+{
+	int rc = 0;
+
+	switch (type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		*hcapi_type = TF_RESC_TYPE_SRAM_FULL_ACTION;
+		break;
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		*hcapi_type = TF_RESC_TYPE_SRAM_MCG;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_8B;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_16B;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_64B;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
+		break;
+	case TF_TBL_TYPE_ACT_STATS_64:
+		*hcapi_type = TF_RESC_TYPE_SRAM_COUNTER_64B;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_SPORT;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_DPORT;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
+		break;
+	case TF_TBL_TYPE_METER_PROF:
+		*hcapi_type = TF_RESC_TYPE_HW_METER_PROF;
+		break;
+	case TF_TBL_TYPE_METER_INST:
+		*hcapi_type = TF_RESC_TYPE_HW_METER_INST;
+		break;
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		*hcapi_type = TF_RESC_TYPE_HW_MIRROR;
+		break;
+	case TF_TBL_TYPE_UPAR:
+		*hcapi_type = TF_RESC_TYPE_HW_UPAR;
+		break;
+	case TF_TBL_TYPE_EPOCH0:
+		*hcapi_type = TF_RESC_TYPE_HW_EPOCH0;
+		break;
+	case TF_TBL_TYPE_EPOCH1:
+		*hcapi_type = TF_RESC_TYPE_HW_EPOCH1;
+		break;
+	case TF_TBL_TYPE_METADATA:
+		*hcapi_type = TF_RESC_TYPE_HW_METADATA;
+		break;
+	case TF_TBL_TYPE_CT_STATE:
+		*hcapi_type = TF_RESC_TYPE_HW_CT_STATE;
+		break;
+	case TF_TBL_TYPE_RANGE_PROF:
+		*hcapi_type = TF_RESC_TYPE_HW_RANGE_PROF;
+		break;
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		*hcapi_type = TF_RESC_TYPE_HW_RANGE_ENTRY;
+		break;
+	case TF_TBL_TYPE_LAG:
+		*hcapi_type = TF_RESC_TYPE_HW_LAG_ENTRY;
+		break;
+	/* Not yet supported */
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+	case TF_TBL_TYPE_VNIC_SVIF:
+	case TF_TBL_TYPE_EXT:   /* No pools for this type */
+	case TF_TBL_TYPE_EXT_0: /* No pools for this type */
+	default:
+		*hcapi_type = -1;
+		rc = -EOPNOTSUPP;
+	}
+
+	return rc;
+}
+
+int
+tf_rm_convert_index(struct tf_session *tfs,
+		    enum tf_dir dir,
+		    enum tf_tbl_type type,
+		    enum tf_rm_convert_type c_type,
+		    uint32_t index,
+		    uint32_t *convert_index)
+{
+	int rc;
+	struct tf_rm_resc *resc;
+	uint32_t hcapi_type;
+	uint32_t base_index;
+
+	if (dir == TF_DIR_RX)
+		resc = &tfs->resc.rx;
+	else if (dir == TF_DIR_TX)
+		resc = &tfs->resc.tx;
+	else
+		return -EOPNOTSUPP;
+
+	rc = tf_rm_convert_tbl_type(type, &hcapi_type);
+	if (rc)
+		return rc;
+
+	switch (type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+	case TF_TBL_TYPE_MCAST_GROUPS:
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+	case TF_TBL_TYPE_ACT_STATS_64:
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		base_index = resc->sram_entry[hcapi_type].start;
+		break;
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+	case TF_TBL_TYPE_METER_PROF:
+	case TF_TBL_TYPE_METER_INST:
+	case TF_TBL_TYPE_UPAR:
+	case TF_TBL_TYPE_EPOCH0:
+	case TF_TBL_TYPE_EPOCH1:
+	case TF_TBL_TYPE_METADATA:
+	case TF_TBL_TYPE_CT_STATE:
+	case TF_TBL_TYPE_RANGE_PROF:
+	case TF_TBL_TYPE_RANGE_ENTRY:
+	case TF_TBL_TYPE_LAG:
+		base_index = resc->hw_entry[hcapi_type].start;
+		break;
+	/* Not yet supported */
+	case TF_TBL_TYPE_VNIC_SVIF:
+	case TF_TBL_TYPE_EXT:   /* No pools for this type */
+	case TF_TBL_TYPE_EXT_0: /* No pools for this type */
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	switch (c_type) {
+	case TF_RM_CONVERT_RM_BASE:
+		*convert_index = index - base_index;
+		break;
+	case TF_RM_CONVERT_ADD_BASE:
+		*convert_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
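+
+/*
+ * Example (illustrative): if
+ * resc->sram_entry[TF_RESC_TYPE_SRAM_FULL_ACTION].start is 1000,
+ * converting index 5 of TF_TBL_TYPE_FULL_ACT_RECORD with
+ * TF_RM_CONVERT_ADD_BASE yields 1005, and TF_RM_CONVERT_RM_BASE maps
+ * 1005 back to 5.
+ */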
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 57ce19b..e69d443 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -107,6 +107,54 @@ struct tf_rm_sram_alloc {
 };
 
 /**
+ * Resource Manager arrays for a single direction
+ */
+struct tf_rm_resc {
+	/** array of HW resource entries */
+	struct tf_rm_entry hw_entry[TF_RESC_TYPE_HW_MAX];
+	/** array of SRAM resource entries */
+	struct tf_rm_entry sram_entry[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Resource Manager Database
+ */
+struct tf_rm_db {
+	struct tf_rm_resc rx;
+	struct tf_rm_resc tx;
+};
+
+/**
+ * Helper function converting direction to text string
+ */
+const char
+*tf_dir_2_str(enum tf_dir dir);
+
+/**
+ * Helper function converting identifier to text string
+ */
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type);
+
+/**
+ * Helper function converting tcam type to text string
+ */
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+
+/**
+ * Helper function used to convert HW HCAPI resource type to a string.
+ */
+const char
+*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type);
+
+/**
+ * Helper function used to convert SRAM HCAPI resource type to a string.
+ */
+const char
+*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type);
+
+/**
  * Initializes the Resource Manager and the associated database
  * entries for HW and SRAM resources. Must be called before any other
  * Resource Manager functions.
@@ -143,4 +191,131 @@ int tf_rm_allocate_validate(struct tf *tfp);
  *   - (-ENOTEMPTY) if resources are not cleaned up before close
  */
 int tf_rm_close(struct tf *tfp);
+
+#if (TF_SHADOW == 1)
+/**
+ * Initializes Shadow DB of configuration elements
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * Returns:
+ *  0  - Success
+ */
+int tf_rm_shadow_db_init(struct tf_session *tfs);
+#endif /* TF_SHADOW */
+
+/**
+ * Perform a Session Pool lookup using the TCAM table type.
+ *
+ * Function will print error msg if tcam type is unsupported or lookup
+ * failed.
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * [in] type
+ *   Type of the object
+ *
+ * [in] dir
+ *    Receive or transmit direction
+ *
+ * [in/out]  session_pool
+ *   Session pool
+ *
+ * Returns:
+ *  0           - Success, will set the **pool
+ *  -EOPNOTSUPP - Type is not supported
+ */
+int
+tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
+			    enum tf_dir dir,
+			    enum tf_tcam_tbl_type type,
+			    struct bitalloc **pool);
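+
+/*
+ * Usage sketch (illustrative): look up the RX L2 context TCAM pool;
+ * a non-zero return means the type is unsupported.
+ *
+ *   struct bitalloc *pool;
+ *   int rc;
+ *
+ *   rc = tf_rm_lookup_tcam_type_pool(tfs, TF_DIR_RX,
+ *                                    TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+ *                                    &pool);
+ *   if (rc)
+ *       return rc;
+ */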
+
+/**
+ * Perform a Session Pool lookup using the Table type.
+ *
+ * Function will print error msg if table type is unsupported or
+ * lookup failed.
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * [in] type
+ *   Type of the object
+ *
+ * [in] dir
+ *    Receive or transmit direction
+ *
+ * [in/out]  session_pool
+ *   Session pool
+ *
+ * Returns:
+ *  0           - Success, will set the **pool
+ *  -EOPNOTSUPP - Type is not supported
+ */
+int
+tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
+			   enum tf_dir dir,
+			   enum tf_tbl_type type,
+			   struct bitalloc **pool);
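+
+/*
+ * Example (hypothetical sketch): look up the TX SRAM full action
+ * record pool; the same pattern applies to every supported table type.
+ *
+ *   struct bitalloc *pool;
+ *   int rc = tf_rm_lookup_tbl_type_pool(tfs, TF_DIR_TX,
+ *                                       TF_TBL_TYPE_FULL_ACT_RECORD,
+ *                                       &pool);
+ */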
+
+/**
+ * Converts the TF Table Type to internal HCAPI_TYPE
+ *
+ * [in] type
+ *   Type to be converted
+ *
+ * [in/out] hcapi_type
+ *   Converted type
+ *
+ * Returns:
+ *  0           - Success, *hcapi_type is set
+ *  -EOPNOTSUPP - Type is not supported
+ */
+int
+tf_rm_convert_tbl_type(enum tf_tbl_type type,
+		       uint32_t *hcapi_type);
+
+/**
+ * TF RM Convert of index methods.
+ */
+enum tf_rm_convert_type {
+	/** Adds the base of the Session Pool to the index */
+	TF_RM_CONVERT_ADD_BASE,
+	/** Removes the Session Pool base from the index */
+	TF_RM_CONVERT_RM_BASE
+};
+
+/**
+ * Provides conversion of the Table Type index in relation to the
+ * Session Pool base.
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * [in] dir
+ *    Receive or transmit direction
+ *
+ * [in] type
+ *   Type of the object
+ *
+ * [in] c_type
+ *   Type of conversion to perform
+ *
+ * [in] index
+ *   Index to be converted
+ *
+ * [in/out] convert_index
+ *   Pointer to the converted index
+ *
+ * Returns:
+ *  0           - Success, *convert_index is set
+ *  -EOPNOTSUPP - Type or conversion method is not supported
+ */
+int
+tf_rm_convert_index(struct tf_session *tfs,
+		    enum tf_dir dir,
+		    enum tf_tbl_type type,
+		    enum tf_rm_convert_type c_type,
+		    uint32_t index,
+		    uint32_t *convert_index);
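+
+/*
+ * Example (hypothetical sketch): pool indices are 0-based, so a
+ * caller adds the session pool base before programming HW and
+ * removes it again on free:
+ *
+ *   uint32_t hw_index;
+ *   int rc = tf_rm_convert_index(tfs, TF_DIR_RX,
+ *                                TF_TBL_TYPE_FULL_ACT_RECORD,
+ *                                TF_RM_CONVERT_ADD_BASE,
+ *                                pool_index, &hw_index);
+ */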
+
 #endif /* TF_RM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 651d3ee..34b6c41 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -76,11 +76,215 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
+	/** Session HW and SRAM resources */
+	struct tf_rm_db resc;
+
+	/* Session HW resource pools */
+
+	/** RX L2 CTXT TCAM Pool */
+	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	/** TX L2 CTXT TCAM Pool */
+	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+
+	/** RX Profile Func Pool */
+	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
+	/** TX Profile Func Pool */
+	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
+
+	/** RX Profile TCAM Pool */
+	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
+	/** TX Profile TCAM Pool */
+	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
+
+	/** RX EM Profile ID Pool */
+	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
+	/** TX EM Profile ID Pool */
+	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
+
+	/** RX WC Profile Pool */
+	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
+	/** TX WC Profile Pool */
+	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
+
+	/* TBD, how do we want to handle EM records? */
+	/* EM Records are not controlled by way of a pool */
+
+	/** RX WC TCAM Pool */
+	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
+	/** TX WC TCAM Pool */
+	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
+
+	/** RX Meter Profile Pool */
+	BITALLOC_INST(TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
+	/** TX Meter Profile Pool */
+	BITALLOC_INST(TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
+
+	/** RX Meter Instance Pool */
+	BITALLOC_INST(TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
+	/** TX Meter Instance Pool */
+	BITALLOC_INST(TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
+
+	/** RX Mirror Configuration Pool */
+	BITALLOC_INST(TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
+	/** TX Mirror Configuration Pool */
+	BITALLOC_INST(TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
+
+	/** RX UPAR Pool */
+	BITALLOC_INST(TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
+	/** TX UPAR Pool */
+	BITALLOC_INST(TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
+
+	/** RX SP TCAM Pool */
+	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
+	/** TX SP TCAM Pool */
+	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
+
+	/** RX FKB Pool */
+	BITALLOC_INST(TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
+	/** TX FKB Pool */
+	BITALLOC_INST(TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
+
+	/** RX Table Scope Pool */
+	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
+	/** TX Table Scope Pool */
+	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
+
+	/** RX L2 Func Pool */
+	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
+	/** TX L2 Func Pool */
+	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
+
+	/** RX Epoch0 Pool */
+	BITALLOC_INST(TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
+	/** TX Epoch0 Pool */
+	BITALLOC_INST(TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
+
+	/** RX Epoch1 Pool */
+	BITALLOC_INST(TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
+	/** TX Epoch1 Pool */
+	BITALLOC_INST(TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
+
+	/** RX MetaData Profile Pool */
+	BITALLOC_INST(TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
+	/** TX MetaData Profile Pool */
+	BITALLOC_INST(TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
+
+	/** RX Connection Tracking State Pool */
+	BITALLOC_INST(TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
+	/** TX Connection Tracking State Pool */
+	BITALLOC_INST(TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
+
+	/** RX Range Profile Pool */
+	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
+	/** TX Range Profile Pool */
+	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
+
+	/** RX Range Pool */
+	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
+	/** TX Range Pool */
+	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
+
+	/** RX LAG Pool */
+	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
+	/** TX LAG Pool */
+	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
+
+	/* Session SRAM pools */
+
+	/** RX Full Action Record Pool */
+	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_RX,
+		      TF_RSVD_SRAM_FULL_ACTION_RX);
+	/** TX Full Action Record Pool */
+	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_TX,
+		      TF_RSVD_SRAM_FULL_ACTION_TX);
+
+	/** RX Multicast Group Pool, only RX is supported */
+	BITALLOC_INST(TF_SRAM_MCG_POOL_NAME_RX,
+		      TF_RSVD_SRAM_MCG_RX);
+
+	/** RX Encap 8B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_RX,
+		      TF_RSVD_SRAM_ENCAP_8B_RX);
+	/** TX Encap 8B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_ENCAP_8B_TX);
+
+	/** RX Encap 16B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_RX,
+		      TF_RSVD_SRAM_ENCAP_16B_RX);
+	/** TX Encap 16B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_ENCAP_16B_TX);
+
+	/** TX Encap 64B Pool, only TX is supported */
+	BITALLOC_INST(TF_SRAM_ENCAP_64B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_ENCAP_64B_TX);
+
+	/** RX Source Properties SMAC Pool */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_RX,
+		      TF_RSVD_SRAM_SP_SMAC_RX);
+	/** TX Source Properties SMAC Pool */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_TX,
+		      TF_RSVD_SRAM_SP_SMAC_TX);
+
+	/** TX Source Properties SMAC IPv4 Pool, only TX is supported */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
+		      TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
+
+	/** TX Source Properties SMAC IPv6 Pool, only TX is supported */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
+		      TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
+
+	/** RX Counter 64B Pool */
+	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_RX,
+		      TF_RSVD_SRAM_COUNTER_64B_RX);
+	/** TX Counter 64B Pool */
+	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_COUNTER_64B_TX);
+
+	/** RX NAT Source Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_SPORT_RX);
+	/** TX NAT Source Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_SPORT_TX);
+
+	/** RX NAT Destination Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_DPORT_RX);
+	/** TX NAT Destination Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_DPORT_TX);
+
+	/** RX NAT Source IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_S_IPV4_RX);
+	/** TX NAT Source IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_S_IPV4_TX);
+
+	/** RX NAT Destination IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_D_IPV4_RX);
+	/** TX NAT Destination IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_D_IPV4_TX);
+
+	/**
+	 * Pools not allocated from HCAPI RM
+	 */
+
+	/** RX L2 Ctx Remap ID Pool */
+	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	/** TX L2 Ctx Remap ID Pool */
+	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+
 	/** CRC32 seed table */
 #define TF_LKUP_SEED_MEM_SIZE 512
 	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
+
 	/** Lookup3 init values */
 	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
-
 };
+
 #endif /* _TF_SESSION_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 08/34] net/bnxt: add resource manager functionality
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (6 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 07/34] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 09/34] net/bnxt: add tf core identifier support Venkat Duvvuru
                       ` (26 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TruFlow RM functionality for resource handling.
- Update the TruFlow Resource Manager (RM) with resource
  support functions for debugging as well as resource cleanup.
- Add support for internal and external pools.
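
The cleanup support centers on the close path; a minimal sketch of the
per-direction flow using the functions added in this patch (error
handling elided):

  /* Prune a copy of the HW database down to lingering entries */
  rc = tf_rm_hw_to_flush(tfs, dir, hw_entries, hw_flush_entries);
  if (rc == 1) {
          /* Log what was not cleaned up, then have FW flush it */
          tf_rm_log_hw_flush(dir, hw_flush_entries);
          tf_msg_session_hw_resc_flush(tfp, dir, hw_flush_entries);
  }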

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c    |   14 +
 drivers/net/bnxt/tf_core/tf_core.h    |   26 +
 drivers/net/bnxt/tf_core/tf_rm.c      | 1718 +++++++++++++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_session.h |   10 +
 drivers/net/bnxt/tf_core/tf_tbl.h     |   43 +
 5 files changed, 1735 insertions(+), 76 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.h

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 7d76efa..bb6d38b 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -149,6 +149,20 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup_close;
 	}
 
+	/* Shadow DB configuration */
+	if (parms->shadow_copy) {
+		/* Ignore shadow_copy setting */
+		session->shadow_copy = 0; /* parms->shadow_copy; */
+#if (TF_SHADOW == 1)
+		rc = tf_rm_shadow_db_init(tfs);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "Shadow DB Initialization failed, rc:%d\n",
+				    rc);
+		/* Add additional processing */
+#endif /* TF_SHADOW */
+	}
+
 	/* Adjust the Session with what firmware allowed us to get */
 	rc = tf_rm_allocate_validate(tfp);
 	if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 3455d8f..16c8251 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -30,6 +30,32 @@ enum tf_dir {
 	TF_DIR_MAX
 };
 
+/**
+ * External pool size
+ *
+ * Defines a single pool of external action records of
+ * fixed size.  Currently, this is an index.
+ */
+#define TF_EXT_POOL_ENTRY_SZ_BYTES 1
+
+/**
+ * External pool entry count
+ *
+ * Defines the number of entries in the external action pool
+ */
+#define TF_EXT_POOL_ENTRY_CNT (1 * 1024)
+
+/**
+ * Number of external pools
+ */
+#define TF_EXT_POOL_CNT_MAX 1
+
+/**
+ * External pool Id
+ */
+#define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
+#define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
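+
+/*
+ * Worked sizing note (illustrative): with the defaults above a single
+ * external action pool covers TF_EXT_POOL_ENTRY_CNT (1024) entries of
+ * TF_EXT_POOL_ENTRY_SZ_BYTES (1) each, i.e. 1 KB of index space, and
+ * TF_EXT_POOL_CNT_MAX limits a session to one such pool.
+ */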
+
 /********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 56767e7..a5e96f29 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -104,9 +104,82 @@ const char
 	case TF_IDENT_TYPE_L2_FUNC:
 		return "l2_func";
 	default:
-		break;
+		return "Invalid identifier";
+	}
+}
+
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+{
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		return "l2_ctxt_tcam";
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		return "prof_tcam";
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		return "wc_tcam";
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+		return "veb_tcam";
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+		return "sp_tcam";
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+		return "ct_rule_tcam";
+	default:
+		return "Invalid tcam table type";
+	}
+}
+
+const char
+*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
+{
+	switch (hw_type) {
+	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
+		return "L2 ctxt tcam";
+	case TF_RESC_TYPE_HW_PROF_FUNC:
+		return "Profile Func";
+	case TF_RESC_TYPE_HW_PROF_TCAM:
+		return "Profile tcam";
+	case TF_RESC_TYPE_HW_EM_PROF_ID:
+		return "EM profile id";
+	case TF_RESC_TYPE_HW_EM_REC:
+		return "EM record";
+	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
+		return "WC tcam profile id";
+	case TF_RESC_TYPE_HW_WC_TCAM:
+		return "WC tcam";
+	case TF_RESC_TYPE_HW_METER_PROF:
+		return "Meter profile";
+	case TF_RESC_TYPE_HW_METER_INST:
+		return "Meter instance";
+	case TF_RESC_TYPE_HW_MIRROR:
+		return "Mirror";
+	case TF_RESC_TYPE_HW_UPAR:
+		return "UPAR";
+	case TF_RESC_TYPE_HW_SP_TCAM:
+		return "Source properties tcam";
+	case TF_RESC_TYPE_HW_L2_FUNC:
+		return "L2 Function";
+	case TF_RESC_TYPE_HW_FKB:
+		return "FKB";
+	case TF_RESC_TYPE_HW_TBL_SCOPE:
+		return "Table scope";
+	case TF_RESC_TYPE_HW_EPOCH0:
+		return "EPOCH0";
+	case TF_RESC_TYPE_HW_EPOCH1:
+		return "EPOCH1";
+	case TF_RESC_TYPE_HW_METADATA:
+		return "Metadata";
+	case TF_RESC_TYPE_HW_CT_STATE:
+		return "Connection tracking state";
+	case TF_RESC_TYPE_HW_RANGE_PROF:
+		return "Range profile";
+	case TF_RESC_TYPE_HW_RANGE_ENTRY:
+		return "Range entry";
+	case TF_RESC_TYPE_HW_LAG_ENTRY:
+		return "LAG";
+	default:
+		return "Invalid hw resource type";
 	}
-	return "Invalid identifier";
 }
 
 const char
@@ -145,6 +218,93 @@ const char
 }
 
 /**
+ * Helper function to perform a HW HCAPI resource type lookup against
+ * the reserved value of the same static type.
+ *
+ * Returns:
+ *   -EOPNOTSUPP - Reserved resource type not supported
+ *   Value       - Integer value of the reserved value for the requested type
+ */
+static int
+tf_rm_rsvd_hw_value(enum tf_dir dir, enum tf_resource_type_hw index)
+{
+	uint32_t value = -EOPNOTSUPP;
+
+	switch (index) {
+	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_L2_CTXT_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_PROF_FUNC:
+		TF_RESC_RSVD(dir, TF_RSVD_PROF_FUNC, value);
+		break;
+	case TF_RESC_TYPE_HW_PROF_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_PROF_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_EM_PROF_ID:
+		TF_RESC_RSVD(dir, TF_RSVD_EM_PROF_ID, value);
+		break;
+	case TF_RESC_TYPE_HW_EM_REC:
+		TF_RESC_RSVD(dir, TF_RSVD_EM_REC, value);
+		break;
+	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
+		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM_PROF_ID, value);
+		break;
+	case TF_RESC_TYPE_HW_WC_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_METER_PROF:
+		TF_RESC_RSVD(dir, TF_RSVD_METER_PROF, value);
+		break;
+	case TF_RESC_TYPE_HW_METER_INST:
+		TF_RESC_RSVD(dir, TF_RSVD_METER_INST, value);
+		break;
+	case TF_RESC_TYPE_HW_MIRROR:
+		TF_RESC_RSVD(dir, TF_RSVD_MIRROR, value);
+		break;
+	case TF_RESC_TYPE_HW_UPAR:
+		TF_RESC_RSVD(dir, TF_RSVD_UPAR, value);
+		break;
+	case TF_RESC_TYPE_HW_SP_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_SP_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_L2_FUNC:
+		TF_RESC_RSVD(dir, TF_RSVD_L2_FUNC, value);
+		break;
+	case TF_RESC_TYPE_HW_FKB:
+		TF_RESC_RSVD(dir, TF_RSVD_FKB, value);
+		break;
+	case TF_RESC_TYPE_HW_TBL_SCOPE:
+		TF_RESC_RSVD(dir, TF_RSVD_TBL_SCOPE, value);
+		break;
+	case TF_RESC_TYPE_HW_EPOCH0:
+		TF_RESC_RSVD(dir, TF_RSVD_EPOCH0, value);
+		break;
+	case TF_RESC_TYPE_HW_EPOCH1:
+		TF_RESC_RSVD(dir, TF_RSVD_EPOCH1, value);
+		break;
+	case TF_RESC_TYPE_HW_METADATA:
+		TF_RESC_RSVD(dir, TF_RSVD_METADATA, value);
+		break;
+	case TF_RESC_TYPE_HW_CT_STATE:
+		TF_RESC_RSVD(dir, TF_RSVD_CT_STATE, value);
+		break;
+	case TF_RESC_TYPE_HW_RANGE_PROF:
+		TF_RESC_RSVD(dir, TF_RSVD_RANGE_PROF, value);
+		break;
+	case TF_RESC_TYPE_HW_RANGE_ENTRY:
+		TF_RESC_RSVD(dir, TF_RSVD_RANGE_ENTRY, value);
+		break;
+	case TF_RESC_TYPE_HW_LAG_ENTRY:
+		TF_RESC_RSVD(dir, TF_RSVD_LAG_ENTRY, value);
+		break;
+	default:
+		break;
+	}
+
+	return value;
+}
+
+/**
  * Helper function to perform a SRAM HCAPI resource type lookup
  * against the reserved value of the same static type.
  *
@@ -205,6 +365,36 @@ tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
 }
 
 /**
+ * Helper function to print all the HW resource qcaps errors reported
+ * in the error_flag.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] hw_query
+ *   Pointer to the HW query result holding the elements available
+ *
+ * [in] error_flag
+ *   Pointer to the hw error flags created at time of the query check
+ */
+static void
+tf_rm_print_hw_qcaps_error(enum tf_dir dir,
+			   struct tf_rm_hw_query *hw_query,
+			   uint32_t *error_flag)
+{
+	int i;
+
+	PMD_DRV_LOG(ERR, "QCAPS errors HW\n");
+	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	PMD_DRV_LOG(ERR, "  Elements:\n");
+
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
+		if (*error_flag & 1 << i)
+			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+				    tf_hcapi_hw_2_str(i),
+				    hw_query->hw_query[i].max,
+				    tf_rm_rsvd_hw_value(dir, i));
+	}
+}
+
+/**
  * Helper function to print all the SRAM resource qcaps errors
  * reported in the error_flag.
  *
@@ -264,12 +454,139 @@ tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
 			    uint32_t *error_flag)
 {
 	*error_flag = 0;
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_L2_CTXT_TCAM,
+			     TF_RSVD_L2_CTXT_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_PROF_FUNC,
+			     TF_RSVD_PROF_FUNC,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_PROF_TCAM,
+			     TF_RSVD_PROF_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EM_PROF_ID,
+			     TF_RSVD_EM_PROF_ID,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EM_REC,
+			     TF_RSVD_EM_REC,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+			     TF_RSVD_WC_TCAM_PROF_ID,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_WC_TCAM,
+			     TF_RSVD_WC_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_METER_PROF,
+			     TF_RSVD_METER_PROF,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_METER_INST,
+			     TF_RSVD_METER_INST,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_MIRROR,
+			     TF_RSVD_MIRROR,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_UPAR,
+			     TF_RSVD_UPAR,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_SP_TCAM,
+			     TF_RSVD_SP_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_L2_FUNC,
+			     TF_RSVD_L2_FUNC,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_FKB,
+			     TF_RSVD_FKB,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_TBL_SCOPE,
+			     TF_RSVD_TBL_SCOPE,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EPOCH0,
+			     TF_RSVD_EPOCH0,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EPOCH1,
+			     TF_RSVD_EPOCH1,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_METADATA,
+			     TF_RSVD_METADATA,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_CT_STATE,
+			     TF_RSVD_CT_STATE,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_RANGE_PROF,
+			     TF_RSVD_RANGE_PROF,
+			     error_flag);
+
 	TF_RM_CHECK_HW_ALLOC(query,
 			     dir,
 			     TF_RESC_TYPE_HW_RANGE_ENTRY,
 			     TF_RSVD_RANGE_ENTRY,
 			     error_flag);
 
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_LAG_ENTRY,
+			     TF_RSVD_LAG_ENTRY,
+			     error_flag);
+
 	if (*error_flag != 0)
 		return -ENOMEM;
 
@@ -434,26 +751,584 @@ tf_rm_reserve_range(uint32_t count,
 			for (i = 0; i < rsv_begin; i++)
 				ba_alloc_index(pool, i);
 
-			/* Skip and then do the remaining */
-			if (rsv_end < max - 1) {
-				for (i = rsv_end; i < max; i++)
-					ba_alloc_index(pool, i);
-			}
-		}
-	}
+			/* Skip and then do the remaining */
+			if (rsv_end < max - 1) {
+				for (i = rsv_end; i < max; i++)
+					ba_alloc_index(pool, i);
+			}
+		}
+	}
+}
+
+/**
+ * Internal function to mark all the l2 ctxt allocated that Truflow
+ * does not own.
+ */
+static void
+tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
+	uint32_t end = 0;
+
+	/* l2 ctxt rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
+
+	/* l2 ctxt tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the profile tcam and profile func
+ * resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_prof(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_PROF_FUNC;
+	uint32_t end = 0;
+
+	/* profile func rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_FUNC,
+			    tfs->TF_PROF_FUNC_POOL_NAME_RX);
+
+	/* profile func tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_FUNC,
+			    tfs->TF_PROF_FUNC_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_PROF_TCAM;
+
+	/* profile tcam rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_TCAM,
+			    tfs->TF_PROF_TCAM_POOL_NAME_RX);
+
+	/* profile tcam tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_TCAM,
+			    tfs->TF_PROF_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the em profile id allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_em_prof(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_EM_PROF_ID;
+	uint32_t end = 0;
+
+	/* em prof id rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EM_PROF_ID,
+			    tfs->TF_EM_PROF_ID_POOL_NAME_RX);
+
+	/* em prof id tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EM_PROF_ID,
+			    tfs->TF_EM_PROF_ID_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the wildcard tcam and profile id
+ * resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_wc(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_WC_TCAM_PROF_ID;
+	uint32_t end = 0;
+
+	/* wc profile id rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_PROF_ID,
+			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX);
+
+	/* wc profile id tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_PROF_ID,
+			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_WC_TCAM;
+
+	/* wc tcam rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_TCAM_ROW,
+			    tfs->TF_WC_TCAM_POOL_NAME_RX);
+
+	/* wc tcam tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_TCAM_ROW,
+			    tfs->TF_WC_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the meter resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_meter(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_METER_PROF;
+	uint32_t end = 0;
+
+	/* meter profiles rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER_PROF,
+			    tfs->TF_METER_PROF_POOL_NAME_RX);
+
+	/* meter profiles tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER_PROF,
+			    tfs->TF_METER_PROF_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_METER_INST;
+
+	/* meter rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER,
+			    tfs->TF_METER_INST_POOL_NAME_RX);
+
+	/* meter tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER,
+			    tfs->TF_METER_INST_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the mirror resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_mirror(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_MIRROR;
+	uint32_t end = 0;
+
+	/* mirror rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_MIRROR,
+			    tfs->TF_MIRROR_POOL_NAME_RX);
+
+	/* mirror tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_MIRROR,
+			    tfs->TF_MIRROR_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the upar resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_upar(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_UPAR;
+	uint32_t end = 0;
+
+	/* upar rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_UPAR,
+			    tfs->TF_UPAR_POOL_NAME_RX);
+
+	/* upar tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_UPAR,
+			    tfs->TF_UPAR_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the sp tcam resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sp_tcam(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_SP_TCAM;
+	uint32_t end = 0;
+
+	/* sp tcam rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_SP_TCAM,
+			    tfs->TF_SP_TCAM_POOL_NAME_RX);
+
+	/* sp tcam tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_SP_TCAM,
+			    tfs->TF_SP_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the l2 func resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_l2_func(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
+	uint32_t end = 0;
+
+	/* l2 func rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_RX);
+
+	/* l2 func tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the fkb resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_fkb(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_FKB;
+	uint32_t end = 0;
+
+	/* fkb rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_FKB,
+			    tfs->TF_FKB_POOL_NAME_RX);
+
+	/* fkb tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_FKB,
+			    tfs->TF_FKB_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the tbl scope resources allocated
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_tbl_scope(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_TBL_SCOPE;
+	uint32_t end = 0;
+
+	/* tbl scope rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_TBL_SCOPE,
+			    tfs->TF_TBL_SCOPE_POOL_NAME_RX);
+
+	/* tbl scope tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_TBL_SCOPE,
+			    tfs->TF_TBL_SCOPE_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the epoch resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_epoch(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_EPOCH0;
+	uint32_t end = 0;
+
+	/* epoch0 rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH0,
+			    tfs->TF_EPOCH0_POOL_NAME_RX);
+
+	/* epoch0 tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH0,
+			    tfs->TF_EPOCH0_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_EPOCH1;
+
+	/* epoch1 rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH1,
+			    tfs->TF_EPOCH1_POOL_NAME_RX);
+
+	/* epoch1 tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH1,
+			    tfs->TF_EPOCH1_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the metadata resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_metadata(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_METADATA;
+	uint32_t end = 0;
+
+	/* metadata rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METADATA,
+			    tfs->TF_METADATA_POOL_NAME_RX);
+
+	/* metadata tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METADATA,
+			    tfs->TF_METADATA_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the ct state resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_ct_state(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_CT_STATE;
+	uint32_t end = 0;
+
+	/* ct state rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_CT_STATE,
+			    tfs->TF_CT_STATE_POOL_NAME_RX);
+
+	/* ct state tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_CT_STATE,
+			    tfs->TF_CT_STATE_POOL_NAME_TX);
 }
 
 /**
- * Internal function to mark all the l2 ctxt allocated that Truflow
- * does not own.
+ * Internal function to mark all the range resources allocated that
+ * Truflow does not own.
  */
 static void
-tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
+tf_rm_rsvd_range(struct tf_session *tfs)
 {
-	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
+	uint32_t index = TF_RESC_TYPE_HW_RANGE_PROF;
 	uint32_t end = 0;
 
-	/* l2 ctxt rx direction */
+	/* range profile rx direction */
 	if (tfs->resc.rx.hw_entry[index].stride > 0)
 		end = tfs->resc.rx.hw_entry[index].start +
 			tfs->resc.rx.hw_entry[index].stride - 1;
@@ -461,10 +1336,10 @@ tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
 			    tfs->resc.rx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
+			    TF_NUM_RANGE_PROF,
+			    tfs->TF_RANGE_PROF_POOL_NAME_RX);
 
-	/* l2 ctxt tx direction */
+	/* range profile tx direction */
 	if (tfs->resc.tx.hw_entry[index].stride > 0)
 		end = tfs->resc.tx.hw_entry[index].start +
 			tfs->resc.tx.hw_entry[index].stride - 1;
@@ -472,21 +1347,45 @@ tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
 			    tfs->resc.tx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
+			    TF_NUM_RANGE_PROF,
+			    tfs->TF_RANGE_PROF_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_RANGE_ENTRY;
+
+	/* range entry rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_RANGE_ENTRY,
+			    tfs->TF_RANGE_ENTRY_POOL_NAME_RX);
+
+	/* range entry tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_RANGE_ENTRY,
+			    tfs->TF_RANGE_ENTRY_POOL_NAME_TX);
 }
 
 /**
- * Internal function to mark all the l2 func resources allocated that
+ * Internal function to mark all the lag resources allocated that
  * Truflow does not own.
  */
 static void
-tf_rm_rsvd_l2_func(struct tf_session *tfs)
+tf_rm_rsvd_lag_entry(struct tf_session *tfs)
 {
-	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
+	uint32_t index = TF_RESC_TYPE_HW_LAG_ENTRY;
 	uint32_t end = 0;
 
-	/* l2 func rx direction */
+	/* lag entry rx direction */
 	if (tfs->resc.rx.hw_entry[index].stride > 0)
 		end = tfs->resc.rx.hw_entry[index].start +
 			tfs->resc.rx.hw_entry[index].stride - 1;
@@ -494,10 +1393,10 @@ tf_rm_rsvd_l2_func(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
 			    tfs->resc.rx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_RX);
+			    TF_NUM_LAG_ENTRY,
+			    tfs->TF_LAG_ENTRY_POOL_NAME_RX);
 
-	/* l2 func tx direction */
+	/* lag entry tx direction */
 	if (tfs->resc.tx.hw_entry[index].stride > 0)
 		end = tfs->resc.tx.hw_entry[index].start +
 			tfs->resc.tx.hw_entry[index].stride - 1;
@@ -505,8 +1404,8 @@ tf_rm_rsvd_l2_func(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
 			    tfs->resc.tx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_TX);
+			    TF_NUM_LAG_ENTRY,
+			    tfs->TF_LAG_ENTRY_POOL_NAME_TX);
 }
 
 /**
@@ -909,7 +1808,21 @@ tf_rm_reserve_hw(struct tf *tfp)
 	 * used except the resources that Truflow took ownership off.
 	 */
 	tf_rm_rsvd_l2_ctxt(tfs);
+	tf_rm_rsvd_prof(tfs);
+	tf_rm_rsvd_em_prof(tfs);
+	tf_rm_rsvd_wc(tfs);
+	tf_rm_rsvd_mirror(tfs);
+	tf_rm_rsvd_meter(tfs);
+	tf_rm_rsvd_upar(tfs);
+	tf_rm_rsvd_sp_tcam(tfs);
 	tf_rm_rsvd_l2_func(tfs);
+	tf_rm_rsvd_fkb(tfs);
+	tf_rm_rsvd_tbl_scope(tfs);
+	tf_rm_rsvd_epoch(tfs);
+	tf_rm_rsvd_metadata(tfs);
+	tf_rm_rsvd_ct_state(tfs);
+	tf_rm_rsvd_range(tfs);
+	tf_rm_rsvd_lag_entry(tfs);
 }
 
 /**
@@ -972,6 +1885,7 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
 			tf_dir_2_str(dir),
 			error_flag);
+		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
 		goto cleanup;
 	}
 
@@ -1032,65 +1946,388 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	struct tf_rm_entry *sram_entries;
 	uint32_t error_flag;
 
-	if (dir == TF_DIR_RX)
-		sram_entries = tfs->resc.rx.sram_entry;
-	else
-		sram_entries = tfs->resc.tx.sram_entry;
+	if (dir == TF_DIR_RX)
+		sram_entries = tfs->resc.rx.sram_entry;
+	else
+		sram_entries = tfs->resc.tx.sram_entry;
+
+	/* Query for Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+			tf_dir_2_str(dir),
+			error_flag);
+		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
+		goto cleanup;
+	}
+
+	/* Post process SRAM capability */
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
+		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+
+	/* Allocate Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_alloc(tfp,
+					    dir,
+					    &sram_alloc,
+					    sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	/* Perform SRAM allocation validation as it's possible the
+	 * resource availability changed between qcaps and alloc
+	 */
+	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	return 0;
+
+ cleanup:
+	return -1;
+}
+
+/**
+ * Helper function used to prune a HW resource array to only hold
+ * elements that needs to be flushed.
+ *
+ * [in] tfs
+ *   Session handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] hw_entries
+ *   Master HW Resource database
+ *
+ * [in/out] flush_entries
+ *   Pruned HW Resource database of entries to be flushed. This
+ *   array should be passed in as a complete copy of the master HW
+ *   Resource database. The outgoing result will be a pruned version
+ *   based on the result of the requested checking
+ *
+ * Returns:
+ *    0 - Success, no flush required
+ *    1 - Success, flush required
+ *   -1 - Internal error
+ */
+static int
+tf_rm_hw_to_flush(struct tf_session *tfs,
+		  enum tf_dir dir,
+		  struct tf_rm_entry *hw_entries,
+		  struct tf_rm_entry *flush_entries)
+{
+	int rc;
+	int flush_rc = 0;
+	int free_cnt;
+	struct bitalloc *pool;
+
+	/* Check all the hw resource pools and check for left over
+	 * elements. Any found will result in the complete pool of a
+	 * type to get invalidated.
+	 */
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_L2_CTXT_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_PROF_FUNC_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride) {
+		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_PROF_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_EM_PROF_ID_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride) {
+		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	flush_entries[TF_RESC_TYPE_HW_EM_REC].start = 0;
+	flush_entries[TF_RESC_TYPE_HW_EM_REC].stride = 0;
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_WC_TCAM_PROF_ID_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride) {
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_WC_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_METER_PROF_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_PROF].stride) {
+		flush_entries[TF_RESC_TYPE_HW_METER_PROF].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_METER_PROF].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_METER_INST_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_INST].stride) {
+		flush_entries[TF_RESC_TYPE_HW_METER_INST].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_METER_INST].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_MIRROR_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_MIRROR].stride) {
+		flush_entries[TF_RESC_TYPE_HW_MIRROR].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_MIRROR].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_UPAR_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_UPAR].stride) {
+		flush_entries[TF_RESC_TYPE_HW_UPAR].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_UPAR].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SP_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_SP_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_L2_FUNC_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_FUNC].stride) {
+		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_FKB_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_FKB].stride) {
+		flush_entries[TF_RESC_TYPE_HW_FKB].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_FKB].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
 
-	/* Query for Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed\n",
-			    tf_dir_2_str(dir));
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_TBL_SCOPE_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride) {
+		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
+	} else {
+		PMD_DRV_LOG(ERR, "%s: TBL_SCOPE free_cnt:%d, entries:%d\n",
+			    tf_dir_2_str(dir),
+			    free_cnt,
+			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
+		flush_rc = 1;
 	}
 
-	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
-			tf_dir_2_str(dir),
-			error_flag);
-		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_EPOCH0_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH0].stride) {
+		flush_entries[TF_RESC_TYPE_HW_EPOCH0].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_EPOCH0].stride = 0;
+	} else {
+		flush_rc = 1;
 	}
 
-	/* Post process SRAM capability */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
-		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_EPOCH1_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH1].stride) {
+		flush_entries[TF_RESC_TYPE_HW_EPOCH1].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_EPOCH1].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
 
-	/* Allocate Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_alloc(tfp,
-					    dir,
-					    &sram_alloc,
-					    sram_entries);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed\n",
-			    tf_dir_2_str(dir));
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_METADATA_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METADATA].stride) {
+		flush_entries[TF_RESC_TYPE_HW_METADATA].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_METADATA].stride = 0;
+	} else {
+		flush_rc = 1;
 	}
 
-	/* Perform SRAM allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed\n",
-			    tf_dir_2_str(dir));
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_CT_STATE_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_CT_STATE].stride) {
+		flush_entries[TF_RESC_TYPE_HW_CT_STATE].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_CT_STATE].stride = 0;
+	} else {
+		flush_rc = 1;
 	}
 
-	return 0;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_RANGE_PROF_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride) {
+		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
 
- cleanup:
-	return -1;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_RANGE_ENTRY_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride) {
+		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_LAG_ENTRY_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride) {
+		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	return flush_rc;
 }
 
 /**
@@ -1335,6 +2572,32 @@ tf_rm_sram_to_flush(struct tf_session *tfs,
 }
 
 /**
+ * Helper function used to generate an error log for the HW types that
+ * needs to be flushed. The types should have been cleaned up ahead of
+ * invoking tf_close_session.
+ *
+ * [in] hw_entries
+ *   HW Resource database holding elements to be flushed
+ */
+static void
+tf_rm_log_hw_flush(enum tf_dir dir,
+		   struct tf_rm_entry *hw_entries)
+{
+	int i;
+
+	/* Walk the hw flush array and log the types that weren't
+	 * cleaned up.
+	 */
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
+		if (hw_entries[i].stride != 0)
+			PMD_DRV_LOG(ERR,
+				    "%s: %s was not cleaned up\n",
+				    tf_dir_2_str(dir),
+				    tf_hcapi_hw_2_str(i));
+	}
+}
+
+/**
  * Helper function used to generate an error log for the SRAM types
  * that needs to be flushed. The types should have been cleaned up
  * ahead of invoking tf_close_session.
@@ -1386,6 +2649,53 @@ tf_rm_init(struct tf *tfp __rte_unused)
 	/* Initialization of HW Resource Pools */
 	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
 	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
+	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
+	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
+	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
+	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
+	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
+
+	/* TBD, how do we want to handle EM records? */
+	/* EM Records should not be controlled by way of a pool */
+
+	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
+	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
+	ba_init(tfs->TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
+	ba_init(tfs->TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
+	ba_init(tfs->TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
+	ba_init(tfs->TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
+	ba_init(tfs->TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
+	ba_init(tfs->TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
+	ba_init(tfs->TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
+	ba_init(tfs->TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
+	ba_init(tfs->TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
+	ba_init(tfs->TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
+
+	ba_init(tfs->TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
+	ba_init(tfs->TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
+
+	ba_init(tfs->TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
+	ba_init(tfs->TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
+
+	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
+	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
+	ba_init(tfs->TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
+	ba_init(tfs->TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
+	ba_init(tfs->TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
+	ba_init(tfs->TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
+	ba_init(tfs->TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
+	ba_init(tfs->TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
+	ba_init(tfs->TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
+	ba_init(tfs->TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
+	ba_init(tfs->TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
+	ba_init(tfs->TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
+	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
+	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
+	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
+	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
+	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
+	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
 
 	/* Initialization of SRAM Resource Pools
 	 * These pools are set to the TFLIB defined MAX sizes not
@@ -1476,6 +2786,7 @@ tf_rm_close(struct tf *tfp)
 	int rc_close = 0;
 	int i;
 	struct tf_rm_entry *hw_entries;
+	struct tf_rm_entry *hw_flush_entries;
 	struct tf_rm_entry *sram_entries;
 	struct tf_rm_entry *sram_flush_entries;
 	struct tf_session *tfs __rte_unused =
@@ -1501,14 +2812,41 @@ tf_rm_close(struct tf *tfp)
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		if (i == TF_DIR_RX) {
 			hw_entries = tfs->resc.rx.hw_entry;
+			hw_flush_entries = flush_resc.rx.hw_entry;
 			sram_entries = tfs->resc.rx.sram_entry;
 			sram_flush_entries = flush_resc.rx.sram_entry;
 		} else {
 			hw_entries = tfs->resc.tx.hw_entry;
+			hw_flush_entries = flush_resc.tx.hw_entry;
 			sram_entries = tfs->resc.tx.sram_entry;
 			sram_flush_entries = flush_resc.tx.sram_entry;
 		}
 
+		/* Check for any not previously freed HW resources and
+		 * flush if required.
+		 */
+		rc = tf_rm_hw_to_flush(tfs, i, hw_entries, hw_flush_entries);
+		if (rc) {
+			rc_close = -ENOTEMPTY;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, lingering HW resources\n",
+				    tf_dir_2_str(i));
+
+			/* Log the entries to be flushed */
+			tf_rm_log_hw_flush(i, hw_flush_entries);
+			rc = tf_msg_session_hw_resc_flush(tfp,
+							  i,
+							  hw_flush_entries);
+			if (rc) {
+				rc_close = rc;
+				/* Log error */
+				PMD_DRV_LOG(ERR,
+					    "%s, HW flush failed\n",
+					    tf_dir_2_str(i));
+			}
+		}
+
 		/* Check for any not previously freed SRAM resources
 		 * and flush if required.
 		 */
@@ -1560,6 +2898,234 @@ tf_rm_close(struct tf *tfp)
 	return rc_close;
 }
 
+#if (TF_SHADOW == 1)
+int
+tf_rm_shadow_db_init(struct tf_session *tfs)
+{
+	int rc = 1;
+
+	return rc;
+}
+#endif /* TF_SHADOW */
+
+int
+tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
+			    enum tf_dir dir,
+			    enum tf_tcam_tbl_type type,
+			    struct bitalloc **pool)
+{
+	int rc = -EOPNOTSUPP;
+
+	*pool = NULL;
+
+	switch (type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_L2_CTXT_TCAM_POOL_NAME,
+				rc);
+		break;
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_PROF_TCAM_POOL_NAME,
+				rc);
+		break;
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_WC_TCAM_POOL_NAME,
+				rc);
+		break;
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+	default:
+		break;
+	}
+
+	if (rc == -EOPNOTSUPP) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Tcam type not supported, type:%d\n",
+			    dir,
+			    type);
+		return rc;
+	} else if (rc == -1) {
+		PMD_DRV_LOG(ERR,
+			    "%s:, Tcam type lookup failed, type:%d\n",
+			    tf_dir_2_str(dir),
+			    type);
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
+			   enum tf_dir dir,
+			   enum tf_tbl_type type,
+			   struct bitalloc **pool)
+{
+	int rc = -EOPNOTSUPP;
+
+	*pool = NULL;
+
+	switch (type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_FULL_ACTION_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		/* No pools for TX direction, so bail out */
+		if (dir == TF_DIR_TX)
+			break;
+		TF_RM_GET_POOLS_RX(tfs, pool,
+				   TF_SRAM_MCG_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_ENCAP_8B_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_ENCAP_16B_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		/* No pools for RX direction, so bail out */
+		if (dir == TF_DIR_RX)
+			break;
+		TF_RM_GET_POOLS_TX(tfs, pool,
+				   TF_SRAM_ENCAP_64B_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_SP_SMAC_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		/* No pools for RX direction, so bail out */
+		if (dir == TF_DIR_RX)
+			break;
+		TF_RM_GET_POOLS_TX(tfs, pool,
+				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		/* No pools for RX direction, so bail out */
+		if (dir == TF_DIR_RX)
+			break;
+		TF_RM_GET_POOLS_TX(tfs, pool,
+				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_STATS_64:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_STATS_64B_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_NAT_SPORT_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_NAT_S_IPV4_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_NAT_D_IPV4_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_METER_PROF:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_METER_PROF_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_METER_INST:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_METER_INST_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_MIRROR_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_UPAR:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_UPAR_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_EPOCH0:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_EPOCH0_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_EPOCH1:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_EPOCH1_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_METADATA:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_METADATA_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_CT_STATE:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_CT_STATE_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_RANGE_PROF:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_RANGE_PROF_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_RANGE_ENTRY_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_LAG:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_LAG_ENTRY_POOL_NAME,
+				rc);
+		break;
+	/* Not yet supported */
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+	case TF_TBL_TYPE_VNIC_SVIF:
+		break;
+	/* No bitalloc pools for these types */
+	case TF_TBL_TYPE_EXT:
+	case TF_TBL_TYPE_EXT_0:
+	default:
+		break;
+	}
+
+	if (rc == -EOPNOTSUPP) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Table type not supported, type:%d\n",
+			    dir,
+			    type);
+		return rc;
+	} else if (rc == -1) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Table type lookup failed, type:%d\n",
+			    dir,
+			    type);
+		return rc;
+	}
+
+	return 0;
+}
+
 int
 tf_rm_convert_tbl_type(enum tf_tbl_type type,
 		       uint32_t *hcapi_type)
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 34b6c41..fed34f1 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -12,6 +12,7 @@
 #include "bitalloc.h"
 #include "tf_core.h"
 #include "tf_rm.h"
+#include "tf_tbl.h"
 
 /** Session defines
  */
@@ -285,6 +286,15 @@ struct tf_session {
 
 	/** Lookup3 init values */
 	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
+
+	/** Table scope array */
+	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+	/** Each external pool is associated with a single table scope.
+	 *  For each external pool, store the associated table scope in
+	 *  this data structure.
+	 */
+	uint32_t ext_pool_2_scope[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 };
 
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
new file mode 100644
index 0000000..5a5e72f
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_TBL_H_
+#define _TF_TBL_H_
+
+#include <stdint.h>
+
+enum tf_pg_tbl_lvl {
+	PT_LVL_0,
+	PT_LVL_1,
+	PT_LVL_2,
+	PT_LVL_MAX
+};
+
+/** Invalid table scope id */
+#define TF_TBL_SCOPE_INVALID 0xffffffff
+
+/**
+ * Table Scope Control Block
+ *
+ * Holds private data for a table scope. Only one instance of a table
+ * scope with Internal EM is supported.
+ */
+struct tf_tbl_scope_cb {
+	uint32_t tbl_scope_id;
+	int index;
+	uint32_t *ext_pool_mem[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
+};
+
+/**
+ * Initialize the table pool structure to indicate that no table
+ * scope has been associated with the external pool of indexes.
+ *
+ * [in] session
+ *   Pointer to the TruFlow session
+ */
+void
+tf_init_tbl_pool(struct tf_session *session);
+
+#endif /* _TF_TBL_H_ */
-- 
2.7.4
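
[Editor's note: illustrative sketch, not part of the submitted patch]
A minimal example of how the lookup helpers added above are meant to
be used: resolve the session bitalloc pool for a direction/type pair,
then allocate and release an index from it. A valid struct tf_session
(tfs) is assumed; error handling is trimmed.

	struct bitalloc *pool;
	int id, rc;

	rc = tf_rm_lookup_tcam_type_pool(tfs, TF_DIR_RX,
					 TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
					 &pool);
	if (rc == 0) {
		id = ba_alloc(pool);	/* BA_FAIL when pool exhausted */
		if (id != BA_FAIL)
			ba_free(pool, id);	/* return index when done */
	}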


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 09/34] net/bnxt: add tf core identifier support
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (7 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 08/34] net/bnxt: add resource manager functionality Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 10/34] net/bnxt: add tf core TCAM support Venkat Duvvuru
                       ` (25 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Farah Smith

From: Farah Smith <farah.smith@broadcom.com>

- Add TruFlow Identifier resource support
- Add TruFlow public API for Identifier resources.
- Add support code and stack for Identifier resource allocation control.

Signed-off-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c | 156 +++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h |  55 +++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c  |  13 ++++
 3 files changed, 224 insertions(+)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index bb6d38b..7b027f7 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -284,3 +284,159 @@ tf_close_session(struct tf *tfp)
 
 	return rc_close;
 }
+
+/** allocate identifier resource
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_identifier(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms)
+{
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+	int id;
+	int rc;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	switch (parms->ident_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_L2_CTXT_REMAP_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_PROF_FUNC:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_PROF_FUNC_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_EM_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_EM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_WC_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_WC_TCAM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_L2_FUNC:
+		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EOPNOTSUPP;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "%s: %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EINVAL;
+		break;
+	}
+
+	if (rc) {
+		PMD_DRV_LOG(ERR, "%s: identifier pool %s failure\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		return rc;
+	}
+
+	id = ba_alloc(session_pool);
+
+	if (id == BA_FAIL) {
+		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		return -ENOMEM;
+	}
+	parms->id = id;
+	return 0;
+}
+
+/** free identifier resource
+ *
+ * Returns success or failure code.
+ */
+int tf_free_identifier(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms)
+{
+	struct bitalloc *session_pool;
+	int rc;
+	int ba_rc;
+	struct tf_session *tfs;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: Session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	switch (parms->ident_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_L2_CTXT_REMAP_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_PROF_FUNC:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_PROF_FUNC_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_EM_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_EM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_WC_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_WC_TCAM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_L2_FUNC:
+		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EOPNOTSUPP;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "%s: invalid %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EINVAL;
+		break;
+	}
+	if (rc) {
+		PMD_DRV_LOG(ERR, "%s: %s Identifier pool access failed\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		return rc;
+	}
+
+	ba_rc = ba_inuse(session_pool, (int)parms->id);
+
+	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
+		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type),
+			    parms->id);
+		return -EINVAL;
+	}
+
+	ba_free(session_pool, (int)parms->id);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 16c8251..afad9ea 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -402,6 +402,61 @@ enum tf_identifier_type {
 	TF_IDENT_TYPE_L2_FUNC
 };
 
+/** tf_alloc_identifier parameter definition
+ */
+struct tf_alloc_identifier_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [out] Identifier allocated
+	 */
+	uint16_t id;
+};
+
+/** tf_free_identifier parameter definition
+ */
+struct tf_free_identifier_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [in] ID to free
+	 */
+	uint16_t id;
+};
+
+/** allocate identifier resource
+ *
+ * TruFlow core will allocate a free id from the per identifier resource type
+ * pool reserved for the session during tf_open().  No firmware is involved.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_identifier(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms);
+
+/** free identifier resource
+ *
+ * TruFlow core will return an id back to the per identifier resource type pool
+ * reserved for the session.  No firmware is involved.  During tf_close, the
+ * complete pool is returned to the firmware.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_identifier(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms);
+
 /**
  * TCAM table type
  */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 4ce2bc5..c44f96f 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -94,6 +94,19 @@
 } while (0)
 
 /**
+ * This is the MAX data we can transport across regular HWRM
+ */
+#define TF_PCI_BUF_SIZE_MAX 88
+
+/**
+ * If the data is bigger than TF_PCI_BUF_SIZE_MAX, the DMA method is used
+ */
+struct tf_msg_dma_buf {
+	void *va_addr;
+	uint64_t pa_addr;
+};
+
+/**
  * Sends session open request to TF Firmware
  */
 int
-- 
2.7.4
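
[Editor's note: illustrative sketch, not part of the submitted patch]
A usage example for the identifier API above, assuming an open session
handle (tfp). The parameter structs are those defined in tf_core.h:

	struct tf_alloc_identifier_parms aparms = { 0 };
	struct tf_free_identifier_parms fparms = { 0 };

	aparms.dir = TF_DIR_RX;
	aparms.ident_type = TF_IDENT_TYPE_L2_CTXT;
	if (tf_alloc_identifier(tfp, &aparms) == 0) {
		/* use aparms.id, e.g. as an L2 context remap index */
		fparms.dir = TF_DIR_RX;
		fparms.ident_type = TF_IDENT_TYPE_L2_CTXT;
		fparms.id = aparms.id;
		tf_free_identifier(tfp, &fparms);
	}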


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 10/34] net/bnxt: add tf core TCAM support
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (8 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 09/34] net/bnxt: add tf core identifier support Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 11/34] net/bnxt: add tf core table scope support Venkat Duvvuru
                       ` (24 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle, Jay Ding

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add TruFlow TCAM public API functions
- Add TCAM support functions as well as public APIs.

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c | 163 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h | 227 +++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c  | 159 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h  |  30 +++++
 4 files changed, 579 insertions(+)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 7b027f7..39f4a11 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -440,3 +440,166 @@ int tf_free_identifier(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_alloc_tcam_entry(struct tf *tfp,
+		    struct tf_alloc_tcam_entry_parms *parms)
+{
+	int rc;
+	int index;
+	struct tf_session *tfs;
+	struct bitalloc *session_pool;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	rc = tf_rm_lookup_tcam_type_pool(tfs,
+					 parms->dir,
+					 parms->tcam_tbl_type,
+					 &session_pool);
+	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
+	if (rc)
+		return rc;
+
+	index = ba_alloc(session_pool);
+	if (index == BA_FAIL) {
+		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+		return -ENOMEM;
+	}
+
+	parms->idx = index;
+	return 0;
+}
+
+int
+tf_set_tcam_entry(struct tf *tfp,
+		  struct tf_set_tcam_entry_parms *parms)
+{
+	int rc;
+	int id;
+	struct tf_session *tfs;
+	struct bitalloc *session_pool;
+
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "%s, Session info invalid\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/*
+	 * Each TCAM send-message function should validate the key size range.
+	 */
+
+	rc = tf_rm_lookup_tcam_type_pool(tfs,
+					 parms->dir,
+					 parms->tcam_tbl_type,
+					 &session_pool);
+	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
+	if (rc)
+		return rc;
+
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, parms->idx);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "%s: %s: Invalid or not allocated index, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	rc = tf_msg_tcam_entry_set(tfp, parms);
+
+	return rc;
+}
+
+int
+tf_get_tcam_entry(struct tf *tfp __rte_unused,
+		  struct tf_get_tcam_entry_parms *parms __rte_unused)
+{
+	int rc = -EOPNOTSUPP;
+
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "%s, Session info invalid\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+int
+tf_free_tcam_entry(struct tf *tfp,
+		   struct tf_free_tcam_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct bitalloc *session_pool;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: Session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	rc = tf_rm_lookup_tcam_type_pool(tfs,
+					 parms->dir,
+					 parms->tcam_tbl_type,
+					 &session_pool);
+	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
+	if (rc)
+		return rc;
+
+	rc = ba_inuse(session_pool, (int)parms->idx);
+	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
+		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+			    parms->idx);
+		return -EINVAL;
+	}
+
+	ba_free(session_pool, (int)parms->idx);
+
+	rc = tf_msg_tcam_entry_free(tfp, parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR, "%s: %s: Entry %d free failed",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+			    parms->idx);
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index afad9ea..1431d06 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -472,6 +472,233 @@ enum tf_tcam_tbl_type {
 };
 
 /**
+ * @page tcam TCAM Access
+ *
+ * @ref tf_alloc_tcam_entry
+ *
+ * @ref tf_set_tcam_entry
+ *
+ * @ref tf_get_tcam_entry
+ *
+ * @ref tf_free_tcam_entry
+ */
+
+/** tf_alloc_tcam_entry parameter definition
+ */
+struct tf_alloc_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Enable search for matching entry
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Key data to match on (if search)
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key size in bits (if search)
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] Mask data to match on (if search)
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
+	/**
+	 * [out] If search, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current refcnt after allocation
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx allocated
+	 *
+	 */
+	uint16_t idx;
+};
+
+/** allocate TCAM entry
+ *
+ * Allocate a TCAM entry - one of these types:
+ *
+ * L2 Context
+ * Profile TCAM
+ * WC TCAM
+ * VEB TCAM
+ *
+ * This function allocates a TCAM table record. It will attempt to
+ * allocate a TCAM table entry from the session owned TCAM entries or
+ * search a shadow copy of the TCAM table for a matching entry if
+ * search is enabled. Key, mask and result must match for hit to be
+ * set. Only TruFlow core data is accessed. A hash table to entry
+ * mapping is maintained for search purposes. If search is not
+ * enabled, the first available free entry is returned based on
+ * priority and alloc_cnt is set to 1. If search is enabled and a
+ * matching entry to entry_data is found, hit is set to TRUE and
+ * alloc_cnt is set to 1. RefCnt is also returned.
+ *
+ * Also returns success or failure code.
+ */
+int tf_alloc_tcam_entry(struct tf *tfp,
+			struct tf_alloc_tcam_entry_parms *parms);
+
+/** tf_set_tcam_entry parameter definition
+ */
+struct tf_set_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] base index of the entry to program
+	 */
+	uint16_t idx;
+	/**
+	 * [in] struct containing key
+	 */
+	uint8_t *key;
+	/**
+	 * [in] struct containing mask fields
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] key size in bits (if search)
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] struct containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [in] struct containing result size in bits
+	 */
+	uint16_t result_sz_in_bits;
+};
+
+/** set TCAM entry
+ *
+ * Program a TCAM table entry for a TruFlow session.
+ *
+ * If the entry has not been allocated, an error will be returned.
+ *
+ * Returns success or failure code.
+ */
+int tf_set_tcam_entry(struct tf *tfp,
+		      struct tf_set_tcam_entry_parms *parms);
+
+/** tf_get_tcam_entry parameter definition
+ */
+struct tf_get_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type  tcam_tbl_type;
+	/**
+	 * [in] index of the entry to get
+	 */
+	uint16_t idx;
+	/**
+	 * [out] struct containing key
+	 */
+	uint8_t *key;
+	/**
+	 * [out] struct containing mask fields
+	 */
+	uint8_t *mask;
+	/**
+	 * [out] key size in bits
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [out] struct containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [out] struct containing result size in bits
+	 */
+	uint16_t result_sz_in_bits;
+};
+
+/** get TCAM entry
+ *
+ * Read a TCAM table entry for a TruFlow session.
+ *
+ * If the entry has not been allocated, an error will be returned.
+ *
+ * Returns success or failure code.
+ */
+int tf_get_tcam_entry(struct tf *tfp,
+		      struct tf_get_tcam_entry_parms *parms);
+
+/** tf_free_tcam_entry parameter definition
+ */
+struct tf_free_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Index to free
+	 */
+	uint16_t idx;
+	/**
+	 * [out] reference count after free
+	 */
+	uint16_t ref_cnt;
+};
+
+/** free TCAM entry
+ *
+ * Free TCAM entry.
+ *
+ * Firmware checks to ensure the TCAM entries are owned by the TruFlow
+ * session. The TCAM entry is then invalidated by writing an all-ones
+ * mask to the hardware.
+ *
+ * WCTCAM profile id of 0 must be used to invalidate an entry.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tcam_entry(struct tf *tfp,
+		       struct tf_free_tcam_entry_parms *parms);
+
+/**
+ * @page table Table Access
+ *
+ * @ref tf_alloc_tbl_entry
+ *
+ * @ref tf_free_tbl_entry
+ *
+ * @ref tf_set_tbl_entry
+ *
+ * @ref tf_get_tbl_entry
+ */
+
+/**
  * Enumeration of TruFlow table types. A table type is used to identify a
  * resource object.
  *
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c44f96f..f4b2f4c 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -106,6 +106,39 @@ struct tf_msg_dma_buf {
 	uint64_t pa_addr;
 };
 
+static int
+tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
+		   uint32_t *hwrm_type)
+{
+	int rc = 0;
+
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		*hwrm_type = TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY;
+		break;
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		*hwrm_type = TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY;
+		break;
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		*hwrm_type = TF_DEV_DATA_TYPE_TF_WC_ENTRY;
+		break;
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+	default:
+		rc = -EOPNOTSUPP;
+		break;
+	}
+
+	return rc;
+}
+
 /**
  * Sends session open request to TF Firmware
  */
@@ -835,3 +868,129 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+#define TF_BYTES_PER_SLICE(tfp) 12
+#define NUM_SLICES(tfp, bytes) \
+	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
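+/* Editor's note (not in the original patch): NUM_SLICES rounds up,
+ * e.g. a 20 byte key spans NUM_SLICES(tfp, 20) = (20 + 11) / 12 = 2
+ * of the 12 byte TCAM slices.
+ */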
+
+static int
+tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
+{
+	struct tfp_calloc_parms alloc_parms;
+	int rc;
+
+	/* Allocate a DMA buffer */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = size;
+	alloc_parms.alignment = 0;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate tcam dma entry, rc:%d\n",
+			    rc);
+		return -ENOMEM;
+	}
+
+	buf->pa_addr = (uint64_t)alloc_parms.mem_pa;
+	buf->va_addr = alloc_parms.mem_va;
+
+	return 0;
+}
+
+int
+tf_msg_tcam_entry_set(struct tf *tfp,
+		      struct tf_set_tcam_entry_parms *parms)
+{
+	int rc;
+	struct tfp_send_msg_parms mparms = { 0 };
+	struct hwrm_tf_tcam_set_input req = { 0 };
+	struct hwrm_tf_tcam_set_output resp = { 0 };
+	uint16_t key_bytes =
+		TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	uint16_t result_bytes =
+		TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
+	struct tf_msg_dma_buf buf = { 0 };
+	uint8_t *data = NULL;
+	int data_size = 0;
+
+	rc = tf_tcam_tbl_2_hwrm(parms->tcam_tbl_type, &req.type);
+	if (rc != 0)
+		return rc;
+
+	req.idx = tfp_cpu_to_le_16(parms->idx);
+	if (parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
+
+	req.key_size = key_bytes;
+	req.mask_offset = key_bytes;
+	/* Result follows after key and mask, thus multiply by 2 */
+	req.result_offset = 2 * key_bytes;
+	req.result_size = result_bytes;
+	data_size = 2 * req.key_size + req.result_size;
+
+	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
+		/* use pci buffer */
+		data = &req.dev_data[0];
+	} else {
+		/* use dma buffer */
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
+		rc = tf_msg_get_dma_buf(&buf, data_size);
+		if (rc != 0)
+			return rc;
+		data = buf.va_addr;
+		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
+	}
+
+	memcpy(&data[0], parms->key, key_bytes);
+	memcpy(&data[key_bytes], parms->mask, key_bytes);
+	memcpy(&data[req.result_offset], parms->result, result_bytes);
+
+	mparms.tf_type = HWRM_TF_TCAM_SET;
+	mparms.req_data = (uint32_t *)&req;
+	mparms.req_size = sizeof(req);
+	mparms.resp_data = (uint32_t *)&resp;
+	mparms.resp_size = sizeof(resp);
+	mparms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &mparms);
+
+	/* Free the DMA buffer on both the success and error paths */
+	if (buf.va_addr != NULL)
+		tfp_free(buf.va_addr);
+
+	return rc;
+}
+
+int
+tf_msg_tcam_entry_free(struct tf *tfp,
+		       struct tf_free_tcam_entry_parms *in_parms)
+{
+	int rc;
+	struct hwrm_tf_tcam_free_input req =  { 0 };
+	struct hwrm_tf_tcam_free_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	rc = tf_tcam_tbl_2_hwrm(in_parms->tcam_tbl_type, &req.type);
+	if (rc != 0)
+		return rc;
+
+	req.count = 1;
+	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
+	if (in_parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
+
+	parms.tf_type = HWRM_TF_TCAM_FREE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 057de84..fa74d78 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -120,4 +120,34 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   enum tf_dir dir,
 				   struct tf_rm_entry *sram_entry);
 
+/**
+ * Sends tcam entry 'set' to the Firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to set parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_tcam_entry_set(struct tf *tfp,
+			  struct tf_set_tcam_entry_parms *parms);
+
+/**
+ * Sends tcam entry 'free' to the Firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_tcam_entry_free(struct tf *tfp,
+			   struct tf_free_tcam_entry_parms *parms);
+
 #endif  /* _TF_MSG_H_ */
-- 
2.7.4
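
[Editor's note: illustrative sketch, not part of the submitted patch]
An allocate/set/free round trip through the TCAM API above, assuming
an open session handle (tfp) and example key/result widths. With a
16 byte key and mask and an 8 byte result, the set-message payload is
2 * 16 + 8 = 40 bytes, under the 88 byte TF_PCI_BUF_SIZE_MAX, so the
inline HWRM buffer is used rather than the DMA path:

	struct tf_alloc_tcam_entry_parms ap = { 0 };
	struct tf_set_tcam_entry_parms sp = { 0 };
	struct tf_free_tcam_entry_parms fp = { 0 };
	uint8_t key[16] = { 0 }, mask[16] = { 0 }, result[8] = { 0 };

	ap.dir = TF_DIR_RX;
	ap.tcam_tbl_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM;
	ap.key = key;
	ap.mask = mask;
	ap.key_sz_in_bits = sizeof(key) * 8;
	if (tf_alloc_tcam_entry(tfp, &ap) == 0) {
		sp.dir = ap.dir;
		sp.tcam_tbl_type = ap.tcam_tbl_type;
		sp.idx = ap.idx;
		sp.key = key;
		sp.mask = mask;
		sp.key_sz_in_bits = ap.key_sz_in_bits;
		sp.result = result;
		sp.result_sz_in_bits = sizeof(result) * 8;
		if (tf_set_tcam_entry(tfp, &sp) != 0) {
			/* programming failed; release the index */
			fp.dir = ap.dir;
			fp.tcam_tbl_type = ap.tcam_tbl_type;
			fp.idx = ap.idx;
			tf_free_tcam_entry(tfp, &fp);
		}
	}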


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 11/34] net/bnxt: add tf core table scope support
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (9 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 10/34] net/bnxt: add tf core TCAM support Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 12/34] net/bnxt: add EM/EEM functionality Venkat Duvvuru
                       ` (23 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Farah Smith, Michael Wildt

From: Farah Smith <farah.smith@broadcom.com>

- Added TruFlow Table public API
- Added Table Scope capability including Table Type support code for
  setting and getting Table Types.

Signed-off-by: Farah Smith <farah.smith@broadcom.com>
Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile          |   1 +
 drivers/net/bnxt/tf_core/hwrm_tf.h |  21 ++++++
 drivers/net/bnxt/tf_core/tf_core.c |   4 ++
 drivers/net/bnxt/tf_core/tf_core.h | 128 +++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c  |  81 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h  |  63 ++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.c  |  43 +++++++++++++
 7 files changed, 341 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 02f8c3f..6714a6a 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -52,6 +52,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
 #
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index a8a5547..acb9a8b 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -891,6 +891,27 @@ typedef struct tf_session_sram_resc_flush_input {
 } tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
 BUILD_BUG_ON(sizeof(tf_session_sram_resc_flush_input_t) <= TF_MAX_REQ_SIZE);
 
+/* Input params for table type set */
+typedef struct tf_tbl_type_set_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the get apply to RX */
+#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX			(0x0)
+	/* When set to 1, indicates the get apply to TX */
+#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* Type of the object to set */
+	uint32_t			 type;
+	/* Size of the data to set in bytes */
+	uint16_t			 size;
+	/* Data to set */
+	uint8_t			  data[TF_BULK_SEND];
+	/* Index to set */
+	uint32_t			 index;
+} tf_tbl_type_set_input_t, *ptf_tbl_type_set_input_t;
+BUILD_BUG_ON(sizeof(tf_tbl_type_set_input_t) <= TF_MAX_REQ_SIZE);
+
 /* Input params for table type get */
 typedef struct tf_tbl_type_get_input {
 	/* Session Id */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 39f4a11..f04a9b1 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -7,6 +7,7 @@
 
 #include "tf_core.h"
 #include "tf_session.h"
+#include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
@@ -173,6 +174,9 @@ tf_open_session(struct tf                    *tfp,
 	/* Setup hash seeds */
 	tf_seeds_init(session);
 
+	/* Initialize external pool data structures */
+	tf_init_tbl_pool(session);
+
 	session->ref_count++;
 
 	/* Return session ID */
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 1431d06..4c90677 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -458,6 +458,134 @@ int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
 
 /**
+ * @page dram_table DRAM Table Scope Interface
+ *
+ * @ref tf_alloc_tbl_scope
+ *
+ * @ref tf_free_tbl_scope
+ *
+ * If we allocate the EEM memory from the core, we need to store it in
+ * the shared session data structure to make sure it can be freed later.
+ * (for example if the PF goes away)
+ *
+ * Current thought is that memory is allocated within core.
+ */
+
+
+/** tf_alloc_tbl_scope_parms definition
+ */
+struct tf_alloc_tbl_scope_parms {
+	/**
+	 * [in] Maximum key size required.
+	 */
+	uint16_t rx_max_key_sz_in_bits;
+	/**
+	 * [in] Maximum Action size required (includes inlined items)
+	 */
+	uint16_t rx_max_action_entry_sz_in_bits;
+	/**
+	 * [in] Memory size in Megabytes
+	 * Total memory size allocated by user to be divided
+	 * up for actions, hash, counters.  Only inline external actions.
+	 * Use this variable or the number of flows, do not set both.
+	 */
+	uint32_t rx_mem_size_in_mb;
+	/**
+	 * [in] Number of flows * 1000. If set, rx_mem_size_in_mb must equal 0.
+	 */
+	uint32_t rx_num_flows_in_k;
+	/**
+	 * [in] SR2 only receive table access interface id
+	 */
+	uint32_t rx_tbl_if_id;
+	/**
+	 * [in] Maximum key size required.
+	 */
+	uint16_t tx_max_key_sz_in_bits;
+	/**
+	 * [in] Maximum Action size required (includes inlined items)
+	 */
+	uint16_t tx_max_action_entry_sz_in_bits;
+	/**
+	 * [in] Memory size in Megabytes
+	 * Total memory size allocated by user to be divided
+	 * up for actions, hash, counters.  Only inline external actions.
+	 */
+	uint32_t tx_mem_size_in_mb;
+	/**
+	 * [in] Number of flows * 1000
+	 */
+	uint32_t tx_num_flows_in_k;
+	/**
+	 * [in] SR2 only transmit table access interface id
+	 */
+	uint32_t tx_tbl_if_id;
+	/**
+	 * [out] table scope identifier
+	 */
+	uint32_t tbl_scope_id;
+};
+
+struct tf_free_tbl_scope_parms {
+	/**
+	 * [in] table scope identifier
+	 */
+	uint32_t tbl_scope_id;
+};
+
+/**
+ * allocate a table scope
+ *
+ * On SR2, firmware will allocate a scope ID.  On other devices, the scope
+ * is a software construct to identify an EEM table.  This function will
+ * divide the hash memory/buckets and records according to the device
+ * constraints based upon calculations using either the number of flows
+ * requested or the size of memory indicated.  Other parameters passed in
+ * determine the configuration (maximum key size, maximum external action
+ * record size).
+ *
+ * This API will allocate the table region in
+ * DRAM, program the PTU page table entries, and program the number of static
+ * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
+ * 0 in the EM memory for the scope.  Upon successful completion of this API,
+ * hash tables are fully initialized and ready for entries to be inserted.
+ *
+ * A single API is used to allocate a common table scope identifier in both
+ * receive and transmit CFA. The scope identifier is common due to nature of
+ * connection tracking sending notifications between RX and TX direction.
+ *
+ * The receive and transmit table access identifiers specify which rings will
+ * be used to initialize table DRAM.  The application must ensure mutual
+ * exclusivity of ring usage for table scope allocation and any table update
+ * operations.
+ *
+ * The hash table buckets, EM keys, and EM lookup results are stored in the
+ * memory allocated based on the rx_em_hash_mb/tx_em_hash_mb parameters.  The
+ * hash table buckets are stored at the beginning of that memory.
+ *
+ * NOTES:  No EM internal setup is done here. On chip EM records are managed
+ * internally by TruFlow core.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_tbl_scope(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms);
+
+
+/**
+ * free a table scope
+ *
+ * Firmware checks that the table scope ID is owned by the TruFlow
+ * session, verifies that no references to this table scope remains
+ * (SR2 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
+ * then frees the table scope ID.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tbl_scope(struct tf *tfp,
+		      struct tf_free_tbl_scope_parms *parms);
+
+/**
  * TCAM table type
  */
 enum tf_tcam_tbl_type {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index f4b2f4c..b9ed127 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -869,6 +869,87 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+int
+tf_msg_set_tbl_entry(struct tf *tfp,
+		     enum tf_dir dir,
+		     enum tf_tbl_type type,
+		     uint16_t size,
+		     uint8_t *data,
+		     uint32_t index)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_set_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(type);
+	req.size = tfp_cpu_to_le_16(size);
+	req.index = tfp_cpu_to_le_32(index);
+
+	tfp_memcpy(&req.data,
+		   data,
+		   size);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_TBL_TYPE_SET,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_get_tbl_entry(struct tf *tfp,
+		     enum tf_dir dir,
+		     enum tf_tbl_type type,
+		     uint16_t size,
+		     uint8_t *data,
+		     uint32_t index)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_get_input req = { 0 };
+	struct tf_tbl_type_get_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(type);
+	req.index = tfp_cpu_to_le_32(index);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_TBL_TYPE_GET,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Verify that the firmware returned at least the requested data */
+	if (resp.size < size)
+		return -EINVAL;
+
+	tfp_memcpy(data,
+		   &resp.data,
+		   resp.size);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
 #define TF_BYTES_PER_SLICE(tfp) 12
 #define NUM_SLICES(tfp, bytes) \
 	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index fa74d78..9055b16 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -6,6 +6,7 @@
 #ifndef _TF_MSG_H_
 #define _TF_MSG_H_
 
+#include "tf_tbl.h"
 #include "tf_rm.h"
 
 struct tf;
@@ -150,4 +151,66 @@ int tf_msg_tcam_entry_set(struct tf *tfp,
 int tf_msg_tcam_entry_free(struct tf *tfp,
 			   struct tf_free_tcam_entry_parms *parms);
 
+/**
+ * Sends Set message of a Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] dir
+ *   Direction location of the element to set
+ *
+ * [in] type
+ *   Type of the object to set
+ *
+ * [in] size
+ *   Size of the data to set
+ *
+ * [in] data
+ *   Data to set
+ *
+ * [in] index
+ *   Index to set
+ *
+ * Returns:
+ *   0 - Success
+ */
+int tf_msg_set_tbl_entry(struct tf *tfp,
+			 enum tf_dir dir,
+			 enum tf_tbl_type type,
+			 uint16_t size,
+			 uint8_t *data,
+			 uint32_t index);
+
+/**
+ * Sends get message of a Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] dir
+ *   Direction location of the element to get
+ *
+ * [in] type
+ *   Type of the object to get
+ *
+ * [in] size
+ *   Size of the data to read
+ *
+ * [out] data
+ *   Buffer that receives the data read
+ *
+ * [in] index
+ *   Index to get
+ *
+ * Returns:
+ *   0 - Success
+ */
+int tf_msg_get_tbl_entry(struct tf *tfp,
+			 enum tf_dir dir,
+			 enum tf_tbl_type type,
+			 uint16_t size,
+			 uint8_t *data,
+			 uint32_t index);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
new file mode 100644
index 0000000..14bf4ef
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Truflow Table APIs and supporting code */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include "hsi_struct_def_dpdk.h"
+
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "hwrm_tf.h"
+#include "bnxt.h"
+#include "tf_resources.h"
+#include "tf_rm.h"
+
+#define PTU_PTE_VALID          0x1UL
+#define PTU_PTE_LAST           0x2UL
+#define PTU_PTE_NEXT_TO_LAST   0x4UL
+
+/* Number of pointers per page_size */
+#define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
+
+/* API defined in tf_tbl.h */
+void
+tf_init_tbl_pool(struct tf_session *session)
+{
+	enum tf_dir dir;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		session->ext_pool_2_scope[dir][TF_EXT_POOL_0] =
+			TF_TBL_SCOPE_INVALID;
+	}
+}
-- 
2.7.4
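
[Editor's note: illustrative sketch, not part of the submitted patch]
Example parameter setup for the table scope API declared above (the
implementation follows later in the series). Sizing here is by flow
count, so the mem_size_in_mb fields are left 0, since the API says not
to set both; the key/action widths are placeholder values:

	struct tf_alloc_tbl_scope_parms parms = { 0 };
	struct tf_free_tbl_scope_parms fparms = { 0 };

	parms.rx_max_key_sz_in_bits = 448;
	parms.rx_max_action_entry_sz_in_bits = 256;
	parms.rx_num_flows_in_k = 32;	/* 32K RX flows */
	parms.tx_max_key_sz_in_bits = 448;
	parms.tx_max_action_entry_sz_in_bits = 256;
	parms.tx_num_flows_in_k = 32;
	if (tf_alloc_tbl_scope(tfp, &parms) == 0) {
		/* ... insert EEM entries under parms.tbl_scope_id ... */
		fparms.tbl_scope_id = parms.tbl_scope_id;
		tf_free_tbl_scope(tfp, &fparms);
	}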


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 12/34] net/bnxt: add EM/EEM functionality
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (10 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 11/34] net/bnxt: add tf core table scope support Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 13/34] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
                       ` (22 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Pete Spreadborough

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- Add TruFlow flow memory support
- Exact Match (EM) adds the capability to manage and manipulate
  data flows using on chip memory.
- Extended Exact Match (EEM) behaves similarly to EM, but at a
  vastly increased scale by using host DDR, with performance
  tradeoff due to the need to access off-chip memory.

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |    2 +
 drivers/net/bnxt/tf_core/lookup3.h            |  162 +++
 drivers/net/bnxt/tf_core/stack.c              |  107 ++
 drivers/net/bnxt/tf_core/stack.h              |  107 ++
 drivers/net/bnxt/tf_core/tf_core.c            |   50 +
 drivers/net/bnxt/tf_core/tf_core.h            |  480 ++++++-
 drivers/net/bnxt/tf_core/tf_em.c              |  515 +++++++
 drivers/net/bnxt/tf_core/tf_em.h              |  117 ++
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |  166 +++
 drivers/net/bnxt/tf_core/tf_msg.c             |  171 +++
 drivers/net/bnxt/tf_core/tf_msg.h             |   40 +
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1795 ++++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   83 ++
 13 files changed, 3788 insertions(+), 7 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/lookup3.h
 create mode 100644 drivers/net/bnxt/tf_core/stack.c
 create mode 100644 drivers/net/bnxt/tf_core/stack.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 6714a6a..4c95847 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -51,6 +51,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
diff --git a/drivers/net/bnxt/tf_core/lookup3.h b/drivers/net/bnxt/tf_core/lookup3.h
new file mode 100644
index 0000000..e5abcc2
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/lookup3.h
@@ -0,0 +1,162 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Based on lookup3.c, by Bob Jenkins, May 2006, Public Domain.
+ * http://www.burtleburtle.net/bob/c/lookup3.c
+ *
+ * These are functions for producing 32-bit hashes for hash table lookup.
+ * hashword(), hashlittle(), hashlittle2(), hashbig(), mix(), and final()
+ * are externally useful functions. Routines to test the hash are included
+ * if SELF_TEST is defined. You can use this free for any purpose. It is in
+ * the public domain. It has no warranty.
+ */
+
+#ifndef _LOOKUP3_H_
+#define _LOOKUP3_H_
+
+#define rot(x, k) (((x) << (k)) | ((x) >> (32 - (k))))
+
+/** -------------------------------------------------------------------------
+ * This is reversible, so any information in (a,b,c) before mix() is
+ * still in (a,b,c) after mix().
+ *
+ * If four pairs of (a,b,c) inputs are run through mix(), or through
+ * mix() in reverse, there are at least 32 bits of the output that
+ * are sometimes the same for one pair and different for another pair.
+ * This was tested for:
+ *   pairs that differed by one bit, by two bits, in any combination
+ *   of top bits of (a,b,c), or in any combination of bottom bits of
+ *   (a,b,c).
+ *   "differ" is defined as +, -, ^, or ~^.  For + and -, I transformed
+ *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ *   is commonly produced by subtraction) look like a single 1-bit
+ *   difference.
+ *   the base values were pseudorandom, all zero but one bit set, or
+ *   all zero plus a counter that starts at zero.
+ *
+ * Some k values for my "a-=c; a^=rot(c,k); c+=b;" arrangement that
+ * satisfy this are
+ *     4  6  8 16 19  4
+ *     9 15  3 18 27 15
+ *    14  9  3  7 17  3
+ * Well, "9 15 3 18 27 15" didn't quite get 32 bits diffing
+ * for "differ" defined as + with a one-bit base and a two-bit delta.  I
+ * used http://burtleburtle.net/bob/hash/avalanche.html to choose
+ * the operations, constants, and arrangements of the variables.
+ *
+ * This does not achieve avalanche.  There are input bits of (a,b,c)
+ * that fail to affect some output bits of (a,b,c), especially of a.  The
+ * most thoroughly mixed value is c, but it doesn't really even achieve
+ * avalanche in c.
+ *
+ * This allows some parallelism.  Read-after-writes are good at doubling
+ * the number of bits affected, so the goal of mixing pulls in the opposite
+ * direction as the goal of parallelism.  I did what I could.  Rotates
+ * seem to cost as much as shifts on every machine I could lay my hands
+ * on, and rotates are much kinder to the top and bottom bits, so I used
+ * rotates.
+ * --------------------------------------------------------------------------
+ */
+#define mix(a, b, c) \
+{ \
+	(a) -= (c); (a) ^= rot((c), 4);  (c) += b; \
+	(b) -= (a); (b) ^= rot((a), 6);  (a) += c; \
+	(c) -= (b); (c) ^= rot((b), 8);  (b) += a; \
+	(a) -= (c); (a) ^= rot((c), 16); (c) += b; \
+	(b) -= (a); (b) ^= rot((a), 19); (a) += c; \
+	(c) -= (b); (c) ^= rot((b), 4);  (b) += a; \
+}
+
+/** --------------------------------------------------------------------------
+ * final -- final mixing of 3 32-bit values (a,b,c) into c
+ *
+ * Pairs of (a,b,c) values differing in only a few bits will usually
+ * produce values of c that look totally different.  This was tested for
+ *  pairs that differed by one bit, by two bits, in any combination
+ *   of top bits of (a,b,c), or in any combination of bottom bits of
+ *   (a,b,c).
+ *   "differ" is defined as +, -, ^, or ~^.  For + and -, I transformed
+ *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ *   is commonly produced by subtraction) look like a single 1-bit
+ *   difference.
+ *   the base values were pseudorandom, all zero but one bit set, or
+ *   all zero plus a counter that starts at zero.
+ *
+ * These constants passed:
+ *  14 11 25 16 4 14 24
+ *  12 14 25 16 4 14 24
+ * and these came close:
+ *   4  8 15 26 3 22 24
+ *  10  8 15 26 3 22 24
+ *  11  8 15 26 3 22 24
+ * --------------------------------------------------------------------------
+ */
+#define final(a, b, c) \
+{ \
+	(c) ^= (b); (c) -= rot((b), 14); \
+	(a) ^= (c); (a) -= rot((c), 11); \
+	(b) ^= (a); (b) -= rot((a), 25); \
+	(c) ^= (b); (c) -= rot((b), 16); \
+	(a) ^= (c); (a) -= rot((c), 4);  \
+	(b) ^= (a); (b) -= rot((a), 14); \
+	(c) ^= (b); (c) -= rot((b), 24); \
+}
+
+/** --------------------------------------------------------------------
+ *  This works on all machines.  To be useful, it requires
+ *  -- that the key be an array of uint32_t's, and
+ *  -- that the length be the number of uint32_t's in the key
+
+ *  The function hashword() is identical to hashlittle() on little-endian
+ *  machines, and identical to hashbig() on big-endian machines,
+ *  except that the length has to be measured in uint32_ts rather than in
+ *  bytes. hashlittle() is more complicated than hashword() only because
+ *  hashlittle() has to dance around fitting the key bytes into registers.
+ *
+ *  Input Parameters:
+ *	 key: an array of uint32_t values
+ *	 length: the length of the key, in uint32_ts
+ *	 initval: the previous hash, or an arbitrary value
+ * --------------------------------------------------------------------
+ */
+static inline uint32_t hashword(const uint32_t *k,
+				size_t length,
+				uint32_t initval) {
+	uint32_t a, b, c;
+	int index = 12;
+
+	/* Set up the internal state */
+	a = 0xdeadbeef + (((uint32_t)length) << 2) + initval;
+	b = a;
+	c = a;
+
+	/*-------------------------------------------- handle most of the key */
+	while (length > 3) {
+		a += k[index];
+		b += k[index - 1];
+		c += k[index - 2];
+		mix(a, b, c);
+		length -= 3;
+		index -= 3;
+	}
+
+	/*-------------------------------------- handle the last 3 uint32_t's */
+	switch (length) {	      /* all the case statements fall through */
+	case 3:
+		c += k[index - 2];
+		/* Falls through. */
+	case 2:
+		b += k[index - 1];
+		/* Falls through. */
+	case 1:
+		a += k[index];
+		final(a, b, c);
+		/* Falls through. */
+	case 0:	    /* case 0: nothing left to add */
+		/* FALLTHROUGH */
+		break;
+	}
+	/*------------------------------------------------- report the result */
+	return c;
+}
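+/* Editor's note (not in the original patch): with the fixed start
+ * index of 12 above, this variant walks the key words from k[12]
+ * downward, so it can hash keys of at most 13 uint32_t words,
+ * which appears to match the EEM key buffer layout.
+ */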
+
+#endif /* _LOOKUP3_H_ */
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
new file mode 100644
index 0000000..3337073
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <errno.h>
+#include "stack.h"
+
+#define STACK_EMPTY -1
+
+/* Initialize stack
+ */
+int
+stack_init(int num_entries, uint32_t *items, struct stack *st)
+{
+	if (items == NULL || st == NULL)
+		return -EINVAL;
+
+	st->max = num_entries;
+	st->top = STACK_EMPTY;
+	st->items = items;
+
+	return 0;
+}
+
+/* Return the size of the stack
+ */
+int32_t
+stack_size(struct stack *st)
+{
+	return st->top + 1;
+}
+
+/* Check if the stack is empty
+ */
+bool
+stack_is_empty(struct stack *st)
+{
+	return st->top == STACK_EMPTY;
+}
+
+/* Check if the stack is full
+ */
+bool
+stack_is_full(struct stack *st)
+{
+	return st->top == st->max - 1;
+}
+
+/* Add element x to the stack
+ */
+int
+stack_push(struct stack *st, uint32_t x)
+{
+	if (stack_is_full(st))
+		return -EOVERFLOW;
+
+	/* add an element and increments the top index
+	 */
+	st->items[++st->top] = x;
+
+	return 0;
+}
+
+/* Pop the top element x from the stack and return it
+ * in the user-provided location.
+ */
+int
+stack_pop(struct stack *st, uint32_t *x)
+{
+	if (stack_is_empty(st))
+		return -ENODATA;
+
+	*x = st->items[st->top];
+	st->top--;
+
+	return 0;
+}
+
+/* Dump the stack
+ */
+void
+stack_dump(struct stack *st)
+{
+	int i, j;
+
+	printf("top=%d\n", st->top);
+	printf("max=%d\n", st->max);
+
+	if (st->top == -1) {
+		printf("stack is empty\n");
+		return;
+	}
+
+	/* print eight items per line; i is also advanced by the
+	 * inner loop
+	 */
+	for (i = 0; i < st->max; i++) {
+		printf("item[%d] 0x%08x", i, st->items[i]);
+
+		for (j = 0; j < 7; j++) {
+			if (i++ < st->max - 1)
+				printf(" 0x%08x", st->items[i]);
+		}
+		printf("\n");
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/stack.h b/drivers/net/bnxt/tf_core/stack.h
new file mode 100644
index 0000000..6fe8829
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/stack.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+#ifndef _STACK_H_
+#define _STACK_H_
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdint.h>
+
+/** Stack data structure
+ */
+struct stack {
+	int max;         /**< Maximum number of entries */
+	int top;         /**< Index of the top entry, -1 when empty */
+	uint32_t *items; /**< items in the stack */
+};
+
+/** Initialize stack of uint32_t elements
+ *
+ *  [in] num_entries
+ *    maximum number of elements in the stack
+ *
+ *  [in] items
+ *    pointer to the storage for items; must be sized to
+ *    num_entries * sizeof(uint32_t)
+ *
+ *  [in] st
+ *    pointer to the stack structure
+ *
+ *  return
+ *    0 for success
+ */
+int stack_init(int num_entries,
+	       uint32_t *items,
+	       struct stack *st);
+
+/** Return the size of the stack
+ *
+ *  [in] st
+ *    pointer to the stack
+ *
+ *  return
+ *    number of elements
+ */
+int32_t stack_size(struct stack *st);
+
+/** Check if the stack is empty
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *   true or false
+ */
+bool stack_is_empty(struct stack *st);
+
+/** Check if the stack is full
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *   true or false
+ */
+bool stack_is_full(struct stack *st);
+
+/** Add element x to the stack
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * [in] x
+ *   value to push on the stack
+ * return
+ *  0 for success
+ */
+int stack_push(struct stack *st, uint32_t x);
+
+/** Pop the top element x from the stack and return it
+ * in the user-provided location.
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * [in, out] x
+ *  pointer to where the value popped will be written
+ *
+ * return
+ *  0 for success
+ */
+int stack_pop(struct stack *st, uint32_t *x);
+
+/** Dump stack information
+ *
+ * Warning: Not intended for large stacks; every entry is printed.
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *    none
+ */
+void stack_dump(struct stack *st);
+
+#endif /* _STACK_H_ */
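A minimal usage sketch of this API (hypothetical caller, not part of the patch). Note that the caller owns the item storage passed to stack_init():

	#include <stdio.h>
	#include "stack.h"

	static void example_stack_usage(void)
	{
		uint32_t items[32]; /* caller-owned storage */
		struct stack st;
		uint32_t v;

		if (stack_init(32, items, &st) != 0)
			return;

		stack_push(&st, 0x1234);     /* size becomes 1 */
		if (stack_pop(&st, &v) == 0) /* v == 0x1234    */
			printf("popped 0x%x, size now %d\n",
			       v, stack_size(&st));
	}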
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index f04a9b1..fc7d638 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -8,6 +8,7 @@
 #include "tf_core.h"
 #include "tf_session.h"
 #include "tf_tbl.h"
+#include "tf_em.h"
 #include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
@@ -289,6 +290,55 @@ tf_close_session(struct tf *tfp)
 	return rc_close;
 }
 
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_insert_em_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb     *tbl_scope_cb;
+
+	if (tfp == NULL || parms == NULL ||
+	    tfp->session == NULL || tfp->session->core_data == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb =
+		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
+				  parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	/* Process the EM entry per Table Scope type */
+	return tf_insert_eem_entry((struct tf_session *)tfp->session->core_data,
+				   tbl_scope_cb,
+				   parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb     *tbl_scope_cb;
+
+	if (tfp == NULL || parms == NULL ||
+	    tfp->session == NULL || tfp->session->core_data == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb =
+		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
+				  parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	return tf_delete_eem_entry(tfp, parms);
+}
+
 /** allocate identifier resource
  *
  * Returns success or failure code.
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 4c90677..34e643c 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -21,6 +21,10 @@
 
 /********** BEGIN Truflow Core DEFINITIONS **********/
 
+
+#define TF_KILOBYTE  1024
+#define TF_MEGABYTE  (1024 * 1024)
+
 /**
  * direction
  */
@@ -31,6 +35,27 @@ enum tf_dir {
 };
 
 /**
+ * memory choice
+ */
+enum tf_mem {
+	TF_MEM_INTERNAL, /**< Internal */
+	TF_MEM_EXTERNAL, /**< External */
+	TF_MEM_MAX
+};
+
+/**
+ * The size of the external action record (Wh+/Brd2)
+ *
+ * Currently set to 512B.
+ *
+ * The fields in use sum to 304B when aligned on a 16B boundary:
+ * AR (16B) + encap (256B) + stats_ptrs (8B) + resvd (8B)
+ * + stats (16B) = 304B, so in theory the record could be smaller.
+ */
+#define TF_ACTION_RECORD_SZ 512
+
+/**
  * External pool size
  *
  * Defines a single pool of external action records of
@@ -56,6 +81,23 @@ enum tf_dir {
 #define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
 #define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
 
+/** EEM record AR helper
+ *
+ * Helpers to handle the Action Record Pointer in the EEM Record Entry.
+ *
+ * Convert absolute offset to action record pointer in EEM record entry
+ * Convert action record pointer in EEM record entry to absolute offset
+ */
+#define TF_ACT_REC_OFFSET_2_PTR(offset) ((offset) >> 4)
+#define TF_ACT_REC_PTR_2_OFFSET(offset) ((offset) << 4)
+
+#define TF_ACT_REC_INDEX_2_OFFSET(idx) ((idx) << 9)
+
+/*
+ * Helper Macros
+ */
+#define TF_BITS_2_BYTES(num_bits) (((num_bits) + 7) / 8)
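A worked example of the helper macros above; the values follow directly from the shift amounts and the variable names are illustrative only:

	static void example_act_rec_helpers(void)
	{
		uint32_t offset = TF_ACT_REC_INDEX_2_OFFSET(3); /* 3 << 9 == 1536 */
		uint32_t ptr = TF_ACT_REC_OFFSET_2_PTR(offset); /* 1536 >> 4 == 96 */
		uint32_t back = TF_ACT_REC_PTR_2_OFFSET(ptr);   /* 96 << 4 == 1536 */
		uint32_t bytes = TF_BITS_2_BYTES(13);           /* (13 + 7) / 8 == 2 */

		(void)back;
		(void)bytes;
	}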
+
 /********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
 
 /**
@@ -495,7 +537,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t rx_num_flows_in_k;
 	/**
-	 * [in] SR2 only receive table access interface id
+	 * [in] Brd4 only receive table access interface id
 	 */
 	uint32_t rx_tbl_if_id;
 	/**
@@ -517,7 +559,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t tx_num_flows_in_k;
 	/**
-	 * [in] SR2 only receive table access interface id
+	 * [in] Brd4 only transmit table access interface id
 	 */
 	uint32_t tx_tbl_if_id;
 	/**
@@ -536,7 +578,7 @@ struct tf_free_tbl_scope_parms {
 /**
  * allocate a table scope
  *
- * On SR2 Firmware will allocate a scope ID.  On other devices, the scope
+ * On Brd4 Firmware will allocate a scope ID.  On other devices, the scope
  * is a software construct to identify an EEM table.  This function will
  * divide the hash memory/buckets and records according to the device
  * device constraints based upon calculations using either the number of flows
@@ -546,7 +588,7 @@ struct tf_free_tbl_scope_parms {
  *
  * This API will allocate the table region in
  * DRAM, program the PTU page table entries, and program the number of static
- * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
+ * buckets (if Brd4) in the RX and TX CFAs.  Buckets are assumed to start at
  * 0 in the EM memory for the scope.  Upon successful completion of this API,
  * hash tables are fully initialized and ready for entries to be inserted.
  *
@@ -563,7 +605,7 @@ struct tf_free_tbl_scope_parms {
  * memory allocated based on the rx_em_hash_mb/tx_em_hash_mb parameters.  The
  * hash table buckets are stored at the beginning of that memory.
  *
- * NOTES:  No EM internal setup is done here. On chip EM records are managed
+ * NOTE:  No EM internal setup is done here. On chip EM records are managed
  * internally by TruFlow core.
  *
  * Returns success or failure code.
@@ -577,7 +619,7 @@ int tf_alloc_tbl_scope(struct tf *tfp,
  *
  * Firmware checks that the table scope ID is owned by the TruFlow
  * session, verifies that no references to this table scope remains
- * (SR2 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
+ * (Brd4 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
  * then frees the table scope ID.
  *
  * Returns success or failure code.
@@ -905,4 +947,430 @@ enum tf_tbl_type {
 	TF_TBL_TYPE_EXT_0,
 	TF_TBL_TYPE_MAX
 };
+
+/** tf_alloc_tbl_entry parameter definition
+ */
+struct tf_alloc_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/** allocate index table entries
+ *
+ * Internal types:
+ *
+ * Allocate an on chip index table entry or search for a matching
+ * entry of the indicated type for this TruFlow session.
+ *
+ * Allocates an index table record. This function will attempt to
+ * allocate an entry or search an index table for a matching entry if
+ * search is enabled (only the shadow copy of the table is accessed).
+ *
+ * If search is not enabled, the first available free entry is
+ * returned. If search is enabled and a matching entry to entry_data
+ * is found hit is set to TRUE and success is returned.
+ *
+ * External types:
+ *
+ * These are used to allocate inlined action record memory.
+ *
+ * Allocates an external index table action record.
+ *
+ * NOTE:
+ * Implementation of the internals of this function will be a stack with push
+ * and pop.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_tbl_entry(struct tf *tfp,
+		       struct tf_alloc_tbl_entry_parms *parms);
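A hypothetical call sequence for the allocator (a sketch only; tfp is assumed to be an already-opened session handle and use_entry() is an imaginary consumer of the index):

	struct tf_alloc_tbl_entry_parms aparms = { 0 };
	int rc;

	aparms.dir = TF_DIR_RX;
	aparms.type = TF_TBL_TYPE_EXT; /* external action record pool */
	aparms.search_enable = 0;      /* plain allocation, no shadow search */

	rc = tf_alloc_tbl_entry(tfp, &aparms);
	if (rc == 0)
		use_entry(aparms.idx); /* index of the allocated entry */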
+
+/** tf_free_tbl_entry parameter definition
+ */
+struct tf_free_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+/** free index table entry
+ *
+ * Used to free a previously allocated table entry.
+ *
+ * Internal types:
+ *
+ * If session has shadow_copy enabled the shadow DB is searched and if
+ * found the element ref_cnt is decremented. If ref_cnt goes to
+ * zero then the element is returned to the session pool.
+ *
+ * If the session does not have a shadow DB the element is free'ed and
+ * given back to the session pool.
+ *
+ * External types:
+ *
+ * Free's an external index table action record.
+ *
+ * NOTE:
+ * Implementation of the internals of this function will be a stack with push
+ * and pop.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tbl_entry(struct tf *tfp,
+		      struct tf_free_tbl_entry_parms *parms);
+
+/** tf_set_tbl_entry parameter definition
+ */
+struct tf_set_tbl_entry_parms {
+	/**
+	 * [in] Table scope identifier
+	 *
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/** set index table entry
+ *
+ * Used to insert an application-programmed index table entry into a
+ * previously allocated table location.  A shadow copy of the table
+ * is maintained, if enabled (internal objects only).
+ *
+ * Returns success or failure code.
+ */
+int tf_set_tbl_entry(struct tf *tfp,
+		     struct tf_set_tbl_entry_parms *parms);
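Continuing the allocation sketch above (again hypothetical), the allocated index is programmed with tf_set_tbl_entry():

	struct tf_set_tbl_entry_parms sparms = { 0 };
	uint8_t data[8] = { 0 }; /* caller-built entry contents */

	sparms.dir = TF_DIR_RX;
	sparms.type = TF_TBL_TYPE_EXT; /* same type used at alloc time */
	sparms.data = data;
	sparms.data_sz_in_bytes = sizeof(data);
	sparms.idx = aparms.idx; /* index from tf_alloc_tbl_entry() */

	rc = tf_set_tbl_entry(tfp, &sparms);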
+
+/** tf_get_tbl_entry parameter definition
+ */
+struct tf_get_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/** get index table entry
+ *
+ * Used to retrieve a previously set index table entry.
+ *
+ * Reads and compares with the shadow table copy (if enabled) (only
+ * for internal objects).
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_tbl_entry(struct tf *tfp,
+		     struct tf_get_tbl_entry_parms *parms);
+
+/**
+ * @page exact_match Exact Match Table
+ *
+ * @ref tf_insert_em_entry
+ *
+ * @ref tf_delete_em_entry
+ *
+ * @ref tf_search_em_entry
+ *
+ */
+/** tf_insert_em_entry parameter definition
+ */
+struct tf_insert_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] ptr to structure containing key fields
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key bit length
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] ptr to structure containing result field
+	 */
+	uint8_t *em_record;
+	/**
+	 * [out] result size in bits
+	 */
+	uint16_t em_record_sz_in_bits;
+	/**
+	 * [in] duplicate check flag
+	 */
+	uint8_t	dup_check;
+	/**
+	 * [out] Flow handle value for the inserted entry.  This is encoded
+	 * as the entries[4]:bucket[2]:hashId[1]:hash[14]
+	 */
+	uint64_t flow_handle;
+	/**
+	 * [out] Flow id is returned as null (internal)
+	 * Flow id is the GFID value for the inserted entry (external)
+	 * This is the value written to the BD and useful information for mark.
+	 */
+	uint64_t flow_id;
+};
+/**
+ * tf_delete_em_entry parameter definition
+ */
+struct tf_delete_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] epoch group IDs of the entry to delete;
+	 * a two-element array of IDs (Brd4 only)
+	 */
+	uint16_t *epochs;
+	/**
+	 * [in] structure containing flow delete handle information
+	 */
+	uint64_t flow_handle;
+};
+/**
+ * tf_search_em_entry parameter definition
+ */
+struct tf_search_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] ptr to structure containing key fields
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key bit length
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in/out] ptr to structure containing EM record fields
+	 */
+	uint8_t *em_record;
+	/**
+	 * [out] result size in bits
+	 */
+	uint16_t em_record_sz_in_bits;
+	/**
+	 * [in] epoch group IDs of the entry to look up;
+	 * a two-element array of IDs (Brd4 only)
+	 */
+	uint16_t *epochs;
+	/**
+	 * [in] ptr to structure containing flow delete handle
+	 */
+	uint64_t flow_handle;
+};
+
+/** insert em hash entry in internal table memory
+ *
+ * Internal:
+ *
+ * This API inserts an exact match entry into internal EM table memory
+ * of the specified direction.
+ *
+ * Note: The EM record is managed within the TruFlow core and not the
+ * application.
+ *
+ * A shadow copy of the internal record table maintains an association
+ * between the hash and 1, 2, or 4 associated buckets.
+ *
+ * External:
+ * This API inserts an exact match entry into DRAM EM table memory of the
+ * specified direction and table scope.
+ *
+ * When inserting an entry into an exact match table, the TruFlow library may
+ * need to allocate a dynamic bucket for the entry (Brd4 only).
+ *
+ * The insertion of duplicate entries in an EM table is not permitted. If a
+ * TruFlow application can guarantee that it will never insert duplicates, it
+ * can disable duplicate checking by passing a zero value in the dup_check
+ * parameter to this API. This will optimize performance. Otherwise, the
+ * TruFlow library will enforce protection against inserting duplicate
+ * entries.
+ *
+ * Flow handle is defined in this document:
+ *
+ * https://docs.google.com
+ * /document/d/1NESu7RpTN3jwxbokaPfYORQyChYRmJgs40wMIRe8_-Q/edit
+ *
+ * Returns success or busy code.
+ *
+ */
+int tf_insert_em_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms);
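A hypothetical insert sequence (a sketch only; tbl_scope_id is assumed to come from a prior tf_alloc_tbl_scope() call and the key/record contents are caller-built):

	struct tf_insert_em_entry_parms iparms = { 0 };
	uint8_t key[56] = { 0 };    /* 52-byte key plus 4 bytes, per tf_em.c */
	uint8_t record[16] = { 0 }; /* EM result record */
	int rc;

	iparms.dir = TF_DIR_RX;
	iparms.mem = TF_MEM_EXTERNAL;
	iparms.tbl_scope_id = tbl_scope_id;
	iparms.key = key;
	iparms.key_sz_in_bits = 52 * 8;
	iparms.em_record = record;
	iparms.dup_check = 1; /* reject duplicate keys */

	rc = tf_insert_em_entry(tfp, &iparms);
	/* on success, iparms.flow_handle is later passed to delete */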
+
+/** delete em hash entry table memory
+ *
+ * Internal:
+ *
+ * This API deletes an exact match entry from internal EM table memory of the
+ * specified direction. If a valid flow ptr is passed in then that takes
+ * precedence over the pointer to the complete key passed in.
+ *
+ *
+ * External:
+ *
+ * This API deletes an exact match entry from EM table memory of the specified
+ * direction and table scope. If a valid flow handle is passed in then that
+ * takes precedence over the pointer to the complete key passed in.
+ *
+ * The TruFlow library may release a dynamic bucket when an entry is deleted.
+ *
+ *
+ * Returns success or not found code
+ *
+ *
+ */
+int tf_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms);
+
+/** search em hash entry table memory
+ *
+ * Internal:
+ *
+ * This API looks up an EM entry in table memory with the specified EM
+ * key or flow (flow takes precedence) and direction.
+ *
+ * The status will be one of: success or entry not found.  If the lookup
+ * succeeds, a pointer to the matching entry and the result record associated
+ * with the matching entry will be provided.
+ *
+ * If flow_handle is set, search shadow copy.
+ *
+ * Otherwise, query the fw with key to get result.
+ *
+ * External:
+ *
+ * This API looks up an EM entry in table memory with the specified EM
+ * key or flow_handle (flow takes precedence), direction and table scope.
+ *
+ * The status will be one of: success or entry not found.  If the lookup
+ * succeeds, a pointer to the matching entry and the result record associated
+ * with the matching entry will be provided.
+ *
+ * Returns success or not found code
+ *
+ */
+int tf_search_em_entry(struct tf *tfp,
+		       struct tf_search_em_entry_parms *parms);
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
new file mode 100644
index 0000000..bd8e2ba
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -0,0 +1,515 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_em.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+/* Enable EEM table dump
+ */
+#define TF_EEM_DUMP
+
+static struct tf_eem_64b_entry zero_key_entry;
+
+static uint32_t tf_em_get_key_mask(int num_entries)
+{
+	uint32_t mask = num_entries - 1;
+
+	if (num_entries & 0x7FFF)
+		return 0;
+
+	if (num_entries > (128 * 1024 * 1024))
+		return 0;
+
+	return mask;
+}
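To illustrate the mask logic (hypothetical sizes): valid table sizes are expected to be powers of two between 32K and 128M entries, so a 32-bit hash reduces to a bucket index with a single AND:

	static uint32_t example_bucket_index(uint32_t key0_hash)
	{
		/* 1M entries -> mask 0x000FFFFF; sizes that are not a
		 * multiple of 32K, or above 128M, yield 0 and are
		 * rejected by the caller
		 */
		uint32_t mask = tf_em_get_key_mask(1 << 20);

		return key0_hash & mask;
	}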
+
+/* CRC32i support for Key0 hash */
+#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
+#define crc32(x, y) crc32i(~0, x, y)
+
+static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
+0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
+0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
+0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
+0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
+0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
+0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
+0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
+0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
+0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
+0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
+0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
+0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
+0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
+0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
+0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
+0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
+0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
+0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
+0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
+0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
+0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
+0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
+0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
+0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
+0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
+0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
+0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
+0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
+0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
+0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
+0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
+0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
+0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
+0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
+0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
+0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
+0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
+0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
+0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
+0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
+0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
+0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
+0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
+0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
+0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
+0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
+0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
+0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
+0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
+0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
+0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
+0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
+0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
+0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
+0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
+0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
+0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
+0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
+0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
+0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
+0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
+0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
+0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
+0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
+};
+
+static uint32_t crc32i(uint32_t crc, const uint8_t *buf, size_t len)
+{
+	int l;
+
+	for (l = (len - 1); l >= 0; l--)
+		crc = ucrc32(buf[l], crc);
+
+	return ~crc;
+}
+
+static uint32_t tf_em_lkup_get_crc32_hash(struct tf_session *session,
+					  uint8_t *key,
+					  enum tf_dir dir)
+{
+	int i;
+	uint32_t index;
+	uint32_t val1, val2;
+	uint8_t temp[4];
+	uint8_t *kptr = key;
+
+	/* key points at the last byte of the 52-byte hash key; do a
+	 * byte-wise XOR over the whole key, walking backwards, to
+	 * pick the seed index.
+	 */
+	index = *key;
+	kptr--;
+
+	for (i = TF_HW_EM_KEY_MAX_SIZE - 2; i >= 0; i--) {
+		index = index ^ *kptr;
+		kptr--;
+	}
+
+	/* Get seeds */
+	val1 = session->lkup_em_seed_mem[dir][index * 2];
+	val2 = session->lkup_em_seed_mem[dir][index * 2 + 1];
+
+	temp[3] = (uint8_t)(val1 >> 24);
+	temp[2] = (uint8_t)(val1 >> 16);
+	temp[1] = (uint8_t)(val1 >> 8);
+	temp[0] = (uint8_t)(val1 & 0xff);
+	val1 = 0;
+
+	/* Start with seed */
+	if (!(val2 & 0x1))
+		val1 = crc32i(~val1, temp, 4);
+
+	val1 = crc32i(~val1,
+		      (key - (TF_HW_EM_KEY_MAX_SIZE - 1)),
+		      TF_HW_EM_KEY_MAX_SIZE);
+
+	/* End with seed */
+	if (val2 & 0x1)
+		val1 = crc32i(~val1, temp, 4);
+
+	return val1;
+}
+
+static uint32_t tf_em_lkup_get_lookup3_hash(uint32_t lookup3_init_value,
+					    uint8_t *in_key)
+{
+	uint32_t val1;
+
+	val1 = hashword(((uint32_t *)in_key) + 1,
+			 TF_HW_EM_KEY_MAX_SIZE / (sizeof(uint32_t)),
+			 lookup3_init_value);
+
+	return val1;
+}
+
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum tf_em_table_type table_type)
+{
+	int level = 0;
+	int page = offset / TF_EM_PAGE_SIZE;
+	void *addr = NULL;
+	struct tf_em_ctx_mem_info *ctx;
+
+	/* Validate dir before using it to index em_ctx_info */
+	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
+		return NULL;
+
+	if (table_type < KEY0_TABLE || table_type > EFC_TABLE)
+		return NULL;
+
+	ctx = &tbl_scope_cb->em_ctx_info[dir];
+
+	/*
+	 * Use the level according to the num_level of page table
+	 */
+	level = ctx->em_tables[table_type].num_lvl - 1;
+
+	addr = (void *)ctx->em_tables[table_type].pg_tbl[level].pg_va_tbl[page];
+
+	return addr;
+}
+
+/** Read a key table entry
+ *
+ * The entry is read into the caller-provided buffer.
+ */
+static int tf_em_read_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+				 struct tf_eem_64b_entry *entry,
+				 uint32_t entry_size,
+				 uint32_t index,
+				 enum tf_em_table_type table_type,
+				 enum tf_dir dir)
+{
+	void *page;
+	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
+
+	page = tf_em_get_table_page(tbl_scope_cb,
+				    dir,
+				    (index * entry_size),
+				    table_type);
+
+	if (page == NULL)
+		return -EINVAL;
+
+	memcpy((uint8_t *)entry, (uint8_t *)page + entry_offset, entry_size);
+	return 0;
+}
+
+static int tf_em_write_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+				 struct tf_eem_64b_entry *entry,
+				 uint32_t entry_size,
+				 uint32_t index,
+				 enum tf_em_table_type table_type,
+				 enum tf_dir dir)
+{
+	void *page;
+	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
+
+	page = tf_em_get_table_page(tbl_scope_cb,
+				    dir,
+				    (index * entry_size),
+				    table_type);
+
+	if (page == NULL)
+		return -EINVAL;
+
+	memcpy((uint8_t *)page + entry_offset, entry, entry_size);
+
+	return 0;
+}
+
+static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_eem_64b_entry *entry,
+			       uint32_t index,
+			       enum tf_em_table_type table_type,
+			       enum tf_dir dir)
+{
+	int rc;
+	struct tf_eem_64b_entry table_entry;
+
+	rc = tf_em_read_entry(tbl_scope_cb,
+			      &table_entry,
+			      TF_EM_KEY_RECORD_SIZE,
+			      index,
+			      table_type,
+			      dir);
+
+	if (rc != 0)
+		return -EINVAL;
+
+	if (table_entry.hdr.word1 & (1 << TF_LKUP_RECORD_VALID_SHIFT)) {
+		if (entry != NULL) {
+			if (memcmp(&table_entry,
+				   entry,
+				   TF_EM_KEY_RECORD_SIZE) == 0)
+				return -EEXIST;
+		} else {
+			return -EEXIST;
+		}
+
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
+				    uint8_t	       *in_key,
+				    struct tf_eem_64b_entry *key_entry)
+{
+	key_entry->hdr.word1 = result->word1;
+
+	/* The pointer format is currently the same for internal and
+	 * external action records, so no conversion is needed.
+	 */
+	key_entry->hdr.pointer = result->pointer;
+
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+}
+
+/* tf_em_select_inject_table
+ *
+ * Returns:
+ * 0       - Key does not exist in either table and can be inserted
+ *	     at "index" in table "table".
+ * -EEXIST - Key already exists at "index" in table "table".
+ * -EINVAL - Both tables are full or in error; nothing can be
+ *	     inserted.
+ */
+static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
+					  enum tf_dir dir,
+					  struct tf_eem_64b_entry *entry,
+					  uint32_t key0_hash,
+					  uint32_t key1_hash,
+					  uint32_t *index,
+					  enum tf_em_table_type *table)
+{
+	int key0_entry;
+	int key1_entry;
+
+	/*
+	 * Check KEY0 table.
+	 */
+	key0_entry = tf_em_entry_exists(tbl_scope_cb,
+					 entry,
+					 key0_hash,
+					 KEY0_TABLE,
+					 dir);
+
+	/*
+	 * Check KEY1 table.
+	 */
+	key1_entry = tf_em_entry_exists(tbl_scope_cb,
+					 entry,
+					 key1_hash,
+					 KEY1_TABLE,
+					 dir);
+
+	if (key0_entry == -EEXIST) {
+		*table = KEY0_TABLE;
+		*index = key0_hash;
+		return -EEXIST;
+	} else if (key1_entry == -EEXIST) {
+		*table = KEY1_TABLE;
+		*index = key1_hash;
+		return -EEXIST;
+	} else if (key0_entry == 0) {
+		*table = KEY0_TABLE;
+		*index = key0_hash;
+		return 0;
+	} else if (key1_entry == 0) {
+		*table = KEY1_TABLE;
+		*index = key1_hash;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/** insert EEM entry API
+ *
+ * returns:
+ *   0       - success
+ *   -EINVAL - parameter error, no free slot, or the key is already
+ *	       present in the table
+ */
+int tf_insert_eem_entry(struct tf_session	   *session,
+			struct tf_tbl_scope_cb	   *tbl_scope_cb,
+			struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t	   mask;
+	uint32_t	   key0_hash;
+	uint32_t	   key1_hash;
+	uint32_t	   key0_index;
+	uint32_t	   key1_index;
+	struct tf_eem_64b_entry key_entry;
+	uint32_t	   index;
+	enum tf_em_table_type table_type;
+	uint32_t	   gfid;
+	int		   num_of_entry;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].
+				  em_tables[KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
+
+	key0_hash = tf_em_lkup_get_crc32_hash(session,
+				      &parms->key[num_of_entry] - 1,
+				      parms->dir);
+	key0_index = key0_hash & mask;
+
+	key1_hash =
+	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
+					parms->key);
+	key1_index = key1_hash & mask;
+
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct tf_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Find which table to use
+	 */
+	if (tf_em_select_inject_table(tbl_scope_cb,
+				      parms->dir,
+				      &key_entry,
+				      key0_index,
+				      key1_index,
+				      &index,
+				      &table_type) == 0) {
+		if (table_type == KEY0_TABLE) {
+			TF_SET_GFID(gfid,
+				    key0_index,
+				    KEY0_TABLE);
+		} else {
+			TF_SET_GFID(gfid,
+				    key1_index,
+				    KEY1_TABLE);
+		}
+
+		/*
+		 * Inject
+		 */
+		if (tf_em_write_entry(tbl_scope_cb,
+				      &key_entry,
+				      TF_EM_KEY_RECORD_SIZE,
+				      index,
+				      table_type,
+				      parms->dir) == 0) {
+			TF_SET_FLOW_ID(parms->flow_id,
+				       gfid,
+				       TF_GFID_TABLE_EXTERNAL,
+				       parms->dir);
+			TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+						     0,
+						     0,
+						     0,
+						     index,
+						     0,
+						     table_type);
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0       - success
+ *   -EINVAL - parameter error or the entry was not found
+ */
+int tf_delete_eem_entry(struct tf *tfp,
+			struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_session	   *session;
+	struct tf_tbl_scope_cb	   *tbl_scope_cb;
+	enum tf_em_table_type hash_type;
+	uint32_t index;
+
+	if (parms == NULL)
+		return -EINVAL;
+
+	session = (struct tf_session *)tfp->session->core_data;
+	if (session == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	if (parms->flow_handle == 0)
+		return -EINVAL;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	if (tf_em_entry_exists(tbl_scope_cb,
+			       NULL,
+			       index,
+			       hash_type,
+			       parms->dir) == -EEXIST) {
+		tf_em_write_entry(tbl_scope_cb,
+				  &zero_key_entry,
+				  TF_EM_KEY_RECORD_SIZE,
+				  index,
+				  hash_type,
+				  parms->dir);
+
+		return 0;
+	}
+
+	return -EINVAL;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
new file mode 100644
index 0000000..8a3584f
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EM_H_
+#define _TF_EM_H_
+
+#include "tf_core.h"
+#include "tf_session.h"
+
+#define TF_HW_EM_KEY_MAX_SIZE 52
+#define TF_EM_KEY_RECORD_SIZE 64
+
+/** EEM Entry header
+ *
+ */
+struct tf_eem_entry_hdr {
+	uint32_t pointer;
+	uint32_t word1;  /*
+			  * The header is made up of two words,
+			  * this is the first word. This field has multiple
+			  * subfields, there is no suitable single name for
+			  * it so just going with word1.
+			  */
+#define TF_LKUP_RECORD_VALID_SHIFT 31
+#define TF_LKUP_RECORD_VALID_MASK 0x80000000
+#define TF_LKUP_RECORD_L1_CACHEABLE_SHIFT 30
+#define TF_LKUP_RECORD_L1_CACHEABLE_MASK 0x40000000
+#define TF_LKUP_RECORD_STRENGTH_SHIFT 28
+#define TF_LKUP_RECORD_STRENGTH_MASK 0x30000000
+#define TF_LKUP_RECORD_RESERVED_SHIFT 17
+#define TF_LKUP_RECORD_RESERVED_MASK 0x0FFE0000
+#define TF_LKUP_RECORD_KEY_SIZE_SHIFT 8
+#define TF_LKUP_RECORD_KEY_SIZE_MASK 0x0001FF00
+#define TF_LKUP_RECORD_ACT_REC_SIZE_SHIFT 3
+#define TF_LKUP_RECORD_ACT_REC_SIZE_MASK 0x000000F8
+#define TF_LKUP_RECORD_ACT_REC_INT_SHIFT 2
+#define TF_LKUP_RECORD_ACT_REC_INT_MASK 0x00000004
+#define TF_LKUP_RECORD_EXT_FLOW_CTR_SHIFT 1
+#define TF_LKUP_RECORD_EXT_FLOW_CTR_MASK 0x00000002
+#define TF_LKUP_RECORD_ACT_PTR_MSB_SHIFT 0
+#define TF_LKUP_RECORD_ACT_PTR_MSB_MASK 0x00000001
+};
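A sketch of decoding the word1 subfields with the masks above (illustrative only):

	static int example_entry_is_valid(const struct tf_eem_entry_hdr *hdr)
	{
		uint32_t valid = (hdr->word1 & TF_LKUP_RECORD_VALID_MASK) >>
				 TF_LKUP_RECORD_VALID_SHIFT;
		uint32_t key_size = (hdr->word1 & TF_LKUP_RECORD_KEY_SIZE_MASK) >>
				    TF_LKUP_RECORD_KEY_SIZE_SHIFT;

		return valid && key_size != 0;
	}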
+
+/** EEM Entry
+ *  Each EEM entry is 512-bit (64-bytes)
+ */
+struct tf_eem_64b_entry {
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+	/** Header is 8 bytes long */
+	struct tf_eem_entry_hdr hdr;
+};
+
+/**
+ * Allocates EEM Table scope
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ *   -ENOMEM - Out of memory
+ */
+int tf_alloc_eem_tbl_scope(struct tf *tfp,
+			   struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Free's EEM Table scope control block
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_free_eem_tbl_scope_cb(struct tf *tfp,
+			     struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Function to search for table scope control block structure
+ * with specified table scope ID.
+ *
+ * [in] session
+ *   Session to use for the search of the table scope control block
+ * [in] tbl_scope_id
+ *   Table scope ID to search for
+ *
+ * Returns:
+ *  Pointer to the found table scope control block struct or NULL if
+ *  table scope control block struct not found
+ */
+struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
+					  uint32_t tbl_scope_id);
+
+int tf_insert_eem_entry(struct tf_session *session,
+			struct tf_tbl_scope_cb *tbl_scope_cb,
+			struct tf_insert_em_entry_parms *parms);
+
+int tf_delete_eem_entry(struct tf *tfp,
+			struct tf_delete_em_entry_parms *parms);
+
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum tf_em_table_type table_type);
+
+#endif /* _TF_EM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
new file mode 100644
index 0000000..417a99c
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
@@ -0,0 +1,166 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EXT_FLOW_HANDLE_H_
+#define _TF_EXT_FLOW_HANDLE_H_
+
+#define TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK	0x00000000F0000000ULL
+#define TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT	28
+#define TF_FLOW_TYPE_FLOW_HANDLE_MASK		0x00000000000000F0ULL
+#define TF_FLOW_TYPE_FLOW_HANDLE_SFT		4
+#define TF_FLAGS_FLOW_HANDLE_MASK		0x000000000000000FULL
+#define TF_FLAGS_FLOW_HANDLE_SFT		0
+#define TF_INDEX_FLOW_HANDLE_MASK		0xFFFFFFF000000000ULL
+#define TF_INDEX_FLOW_HANDLE_SFT		36
+#define TF_ENTRY_NUM_FLOW_HANDLE_MASK		0x0000000E00000000ULL
+#define TF_ENTRY_NUM_FLOW_HANDLE_SFT		33
+#define TF_HASH_TYPE_FLOW_HANDLE_MASK		0x0000000100000000ULL
+#define TF_HASH_TYPE_FLOW_HANDLE_SFT		32
+
+#define TF_FLOW_HANDLE_MASK (TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK |	\
+				TF_FLOW_TYPE_FLOW_HANDLE_MASK |		\
+				TF_FLAGS_FLOW_HANDLE_MASK |		\
+				TF_INDEX_FLOW_HANDLE_MASK |		\
+				TF_ENTRY_NUM_FLOW_HANDLE_MASK |		\
+				TF_HASH_TYPE_FLOW_HANDLE_MASK)
+
+#define TF_GET_FIELDS_FROM_FLOW_HANDLE(flow_handle,			\
+				       num_key_entries,			\
+				       flow_type,			\
+				       flags,				\
+				       index,				\
+				       entry_num,			\
+				       hash_type)			\
+do {									\
+	(num_key_entries) = \
+		(((flow_handle) & TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK) >> \
+		 TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT);			\
+	(flow_type) = (((flow_handle) & TF_FLOW_TYPE_FLOW_HANDLE_MASK) >> \
+		     TF_FLOW_TYPE_FLOW_HANDLE_SFT);			\
+	(flags) = (((flow_handle) & TF_FLAGS_FLOW_HANDLE_MASK) >>	\
+		     TF_FLAGS_FLOW_HANDLE_SFT);				\
+	(index) = (((flow_handle) & TF_INDEX_FLOW_HANDLE_MASK) >>	\
+		     TF_INDEX_FLOW_HANDLE_SFT);				\
+	(entry_num) = (((flow_handle) & TF_ENTRY_NUM_FLOW_HANDLE_MASK) >> \
+		     TF_ENTRY_NUM_FLOW_HANDLE_SFT);			\
+	(hash_type) = (((flow_handle) & TF_HASH_TYPE_FLOW_HANDLE_MASK) >> \
+		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
+} while (0)
+
+#define TF_SET_FIELDS_IN_FLOW_HANDLE(flow_handle,			\
+				     num_key_entries,			\
+				     flow_type,				\
+				     flags,				\
+				     index,				\
+				     entry_num,				\
+				     hash_type)				\
+do {									\
+	(flow_handle) &= ~TF_FLOW_HANDLE_MASK;				\
+	(flow_handle) |= \
+		(((num_key_entries) << TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT) & \
+		 TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= (((flow_type) << TF_FLOW_TYPE_FLOW_HANDLE_SFT) & \
+			TF_FLOW_TYPE_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= (((flags) << TF_FLAGS_FLOW_HANDLE_SFT) &	\
+			TF_FLAGS_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= ((((uint64_t)index) << TF_INDEX_FLOW_HANDLE_SFT) & \
+			TF_INDEX_FLOW_HANDLE_MASK);			\
+	(flow_handle) |=						\
+		((((uint64_t)entry_num) << TF_ENTRY_NUM_FLOW_HANDLE_SFT) & \
+		 TF_ENTRY_NUM_FLOW_HANDLE_MASK);			\
+	(flow_handle) |=						\
+		((((uint64_t)hash_type) << TF_HASH_TYPE_FLOW_HANDLE_SFT) & \
+		 TF_HASH_TYPE_FLOW_HANDLE_MASK);			\
+} while (0)
+#define TF_SET_FIELDS_IN_WH_FLOW_HANDLE TF_SET_FIELDS_IN_FLOW_HANDLE
+
+#define TF_GET_INDEX_FROM_FLOW_HANDLE(flow_handle,			\
+				      index)				\
+do {									\
+	index = (((flow_handle) & TF_INDEX_FLOW_HANDLE_MASK) >>		\
+		     TF_INDEX_FLOW_HANDLE_SFT);				\
+} while (0)
+
+#define TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(flow_handle,			\
+					  hash_type)			\
+do {									\
+	hash_type = (((flow_handle) & TF_HASH_TYPE_FLOW_HANDLE_MASK) >>	\
+		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
+} while (0)
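A round-trip sketch of the flow handle macros (values are hypothetical):

	static void example_flow_handle_roundtrip(void)
	{
		uint64_t fh = 0;
		uint32_t index, hash_type;

		/* pack: index 0x1234, hash type 1 (KEY1), rest zero */
		TF_SET_FIELDS_IN_FLOW_HANDLE(fh, 0, 0, 0, 0x1234, 0, 1);

		TF_GET_INDEX_FROM_FLOW_HANDLE(fh, index);         /* 0x1234 */
		TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(fh, hash_type); /* 1 */

		(void)index;
		(void)hash_type;
	}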
+
+/*
+ * 32 bit Flow ID handlers
+ */
+#define TF_GFID_FLOW_ID_MASK		0xFFFFFFF0UL
+#define TF_GFID_FLOW_ID_SFT		4
+#define TF_FLAG_FLOW_ID_MASK		0x00000002UL
+#define TF_FLAG_FLOW_ID_SFT		1
+#define TF_DIR_FLOW_ID_MASK		0x00000001UL
+#define TF_DIR_FLOW_ID_SFT		0
+
+#define TF_SET_FLOW_ID(flow_id, gfid, flag, dir)			\
+do {									\
+	(flow_id) &= ~(TF_GFID_FLOW_ID_MASK |				\
+		     TF_FLAG_FLOW_ID_MASK |				\
+		     TF_DIR_FLOW_ID_MASK);				\
+	(flow_id) |= (((gfid) << TF_GFID_FLOW_ID_SFT) &			\
+		    TF_GFID_FLOW_ID_MASK) |				\
+		(((flag) << TF_FLAG_FLOW_ID_SFT) &			\
+		 TF_FLAG_FLOW_ID_MASK) |				\
+		(((dir) << TF_DIR_FLOW_ID_SFT) &			\
+		 TF_DIR_FLOW_ID_MASK);					\
+} while (0)
+
+#define TF_GET_GFID_FROM_FLOW_ID(flow_id, gfid)				\
+do {									\
+	gfid = (((flow_id) & TF_GFID_FLOW_ID_MASK) >>			\
+		TF_GFID_FLOW_ID_SFT);					\
+} while (0)
+
+#define TF_GET_DIR_FROM_FLOW_ID(flow_id, dir)				\
+do {									\
+	dir = (((flow_id) & TF_DIR_FLOW_ID_MASK) >>			\
+		TF_DIR_FLOW_ID_SFT);					\
+} while (0)
+
+#define TF_GET_FLAG_FROM_FLOW_ID(flow_id, flag)				\
+do {									\
+	flag = (((flow_id) & TF_FLAG_FLOW_ID_MASK) >>			\
+		TF_FLAG_FLOW_ID_SFT);					\
+} while (0)
+
+/*
+ * 32 bit GFID handlers
+ */
+#define TF_HASH_INDEX_GFID_MASK	0x07FFFFFFUL
+#define TF_HASH_INDEX_GFID_SFT	0
+#define TF_HASH_TYPE_GFID_MASK	0x08000000UL
+#define TF_HASH_TYPE_GFID_SFT	27
+
+#define TF_GFID_TABLE_INTERNAL 0
+#define TF_GFID_TABLE_EXTERNAL 1
+
+#define TF_SET_GFID(gfid, index, type)					\
+do {									\
+	gfid = (((index) << TF_HASH_INDEX_GFID_SFT) &			\
+		TF_HASH_INDEX_GFID_MASK) |				\
+		(((type) << TF_HASH_TYPE_GFID_SFT) &			\
+		 TF_HASH_TYPE_GFID_MASK);				\
+} while (0)
+
+#define TF_GET_HASH_INDEX_FROM_GFID(gfid, index)			\
+do {									\
+	index = (((gfid) & TF_HASH_INDEX_GFID_MASK) >>			\
+		TF_HASH_INDEX_GFID_SFT);				\
+} while (0)
+
+#define TF_GET_HASH_TYPE_FROM_GFID(gfid, type)				\
+do {									\
+	type = (((gfid) & TF_HASH_TYPE_GFID_MASK) >>			\
+		TF_HASH_TYPE_GFID_SFT);					\
+} while (0)
+
+
+#endif /* _TF_EXT_FLOW_HANDLE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index b9ed127..c507ec7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -869,6 +869,177 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+/**
+ * Sends EM mem register request to Firmware
+ */
+int tf_msg_em_mem_rgtr(struct tf *tfp,
+		       int           page_lvl,
+		       int           page_size,
+		       uint64_t      dma_addr,
+		       uint16_t     *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
+	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.page_level = page_lvl;
+	req.page_size = page_size;
+	req.page_dir = tfp_cpu_to_le_64(dma_addr);
+
+	parms.tf_type = HWRM_TF_CTXT_MEM_RGTR;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*ctx_id = tfp_le_to_cpu_16(resp.ctx_id);
+
+	return rc;
+}
+
+/**
+ * Sends EM mem unregister request to Firmware
+ */
+int tf_msg_em_mem_unrgtr(struct tf *tfp,
+			 uint16_t  *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_unrgtr_input req = {0};
+	struct hwrm_tf_ctxt_mem_unrgtr_output resp = {0};
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.ctx_id = tfp_cpu_to_le_32(*ctx_id);
+
+	parms.tf_type = HWRM_TF_CTXT_MEM_UNRGTR;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
+ * Sends EM qcaps request to Firmware
+ */
+int tf_msg_em_qcaps(struct tf *tfp,
+		    int dir,
+		    struct tf_em_caps *em_caps)
+{
+	int rc;
+	struct hwrm_tf_ext_em_qcaps_input  req = {0};
+	struct hwrm_tf_ext_em_qcaps_output resp = { 0 };
+	uint32_t             flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+
+	parms.tf_type = HWRM_TF_EXT_EM_QCAPS;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	em_caps->supported = tfp_le_to_cpu_32(resp.supported);
+	em_caps->max_entries_supported =
+		tfp_le_to_cpu_32(resp.max_entries_supported);
+	em_caps->key_entry_size = tfp_le_to_cpu_16(resp.key_entry_size);
+	em_caps->record_entry_size =
+		tfp_le_to_cpu_16(resp.record_entry_size);
+	em_caps->efc_entry_size = tfp_le_to_cpu_16(resp.efc_entry_size);
+
+	return rc;
+}
+
+/**
+ * Sends EM config request to Firmware
+ */
+int tf_msg_em_cfg(struct tf *tfp,
+		  uint32_t   num_entries,
+		  uint16_t   key0_ctx_id,
+		  uint16_t   key1_ctx_id,
+		  uint16_t   record_ctx_id,
+		  uint16_t   efc_ctx_id,
+		  int        dir)
+{
+	int rc;
+	struct hwrm_tf_ext_em_cfg_input  req = {0};
+	struct hwrm_tf_ext_em_cfg_output resp = {0};
+	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	flags |= HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_PREFERRED_OFFLOAD;
+
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.num_entries = tfp_cpu_to_le_32(num_entries);
+
+	req.key0_ctx_id = tfp_cpu_to_le_16(key0_ctx_id);
+	req.key1_ctx_id = tfp_cpu_to_le_16(key1_ctx_id);
+	req.record_ctx_id = tfp_cpu_to_le_16(record_ctx_id);
+	req.efc_ctx_id = tfp_cpu_to_le_16(efc_ctx_id);
+
+	parms.tf_type = HWRM_TF_EXT_EM_CFG;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
+ * Sends EM operation request to Firmware
+ */
+int tf_msg_em_op(struct tf *tfp,
+		 int        dir,
+		 uint16_t   op)
+{
+	int rc;
+	struct hwrm_tf_ext_em_op_input  req = {0};
+	struct hwrm_tf_ext_em_op_output resp = {0};
+	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.op = tfp_cpu_to_le_16(op);
+
+	parms.tf_type = HWRM_TF_EXT_EM_OP;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
 int
 tf_msg_set_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 9055b16..b8d8c1e 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -122,6 +122,46 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   struct tf_rm_entry *sram_entry);
 
 /**
+ * Sends EM mem register request to Firmware
+ */
+int tf_msg_em_mem_rgtr(struct tf *tfp,
+		       int           page_lvl,
+		       int           page_size,
+		       uint64_t      dma_addr,
+		       uint16_t     *ctx_id);
+
+/**
+ * Sends EM mem unregister request to Firmware
+ */
+int tf_msg_em_mem_unrgtr(struct tf *tfp,
+			 uint16_t     *ctx_id);
+
+/**
+ * Sends EM qcaps request to Firmware
+ */
+int tf_msg_em_qcaps(struct tf *tfp,
+		    int dir,
+		    struct tf_em_caps *em_caps);
+
+/**
+ * Sends EM config request to Firmware
+ */
+int tf_msg_em_cfg(struct tf *tfp,
+		  uint32_t      num_entries,
+		  uint16_t      key0_ctx_id,
+		  uint16_t      key1_ctx_id,
+		  uint16_t      record_ctx_id,
+		  uint16_t      efc_ctx_id,
+		  int           dir);
+
+/**
+ * Sends EM operation request to Firmware
+ */
+int tf_msg_em_op(struct tf *tfp,
+		 int        dir,
+		 uint16_t   op);
+
+/**
  * Sends tcam entry 'set' to the Firmware.
  *
  * [in] tfp
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 14bf4ef..d4705bf 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -15,7 +15,7 @@
 #include "hsi_struct_def_dpdk.h"
 
 #include "tf_core.h"
-#include "tf_session.h"
+#include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
 #include "hwrm_tf.h"
@@ -30,6 +30,1366 @@
 /* Number of pointers per page_size */
 #define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
+#define TF_EM_PG_SZ_4K        (1 << 12)
+#define TF_EM_PG_SZ_8K        (1 << 13)
+#define TF_EM_PG_SZ_64K       (1 << 16)
+#define TF_EM_PG_SZ_256K      (1 << 18)
+#define TF_EM_PG_SZ_1M        (1 << 20)
+#define TF_EM_PG_SZ_2M        (1 << 21)
+#define TF_EM_PG_SZ_4M        (1 << 22)
+#define TF_EM_PG_SZ_1G        (1 << 30)
+
+#define	TF_EM_CTX_ID_INVALID   0xFFFF
+
+#define	TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define	TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
+/**
+ * Function to free a page table
+ *
+ * [in] tp
+ *   Pointer to the page table to free
+ */
+static void
+tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
+{
+	uint32_t i;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		if (!tp->pg_va_tbl[i]) {
+			PMD_DRV_LOG(WARNING,
+				    "No map for page %d table %p\n",
+				    i,
+				    (void *)tp);
+			continue;
+		}
+
+		tfp_free(tp->pg_va_tbl[i]);
+		tp->pg_va_tbl[i] = NULL;
+	}
+
+	tp->pg_count = 0;
+	tfp_free(tp->pg_va_tbl);
+	tp->pg_va_tbl = NULL;
+	tfp_free(tp->pg_pa_tbl);
+	tp->pg_pa_tbl = NULL;
+}
+
+/**
+ * Function to free an EM table
+ *
+ * [in] tbl
+ *   Pointer to the EM table to free
+ */
+static void
+tf_em_free_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		PMD_DRV_LOG(INFO,
+			    "EEM: Freeing page table: size %u lvl %d cnt %u\n",
+			    TF_EM_PAGE_SIZE,
+			    i,
+			    tp->pg_count);
+
+		tf_em_free_pg_tbl(tp);
+	}
+
+	tbl->l0_addr = NULL;
+	tbl->l0_dma_addr = 0;
+	tbl->num_lvl = 0;
+	tbl->num_data_pages = 0;
+}
+
+/**
+ * Allocation of page tables
+ *
+ * [in] tp
+ *   Pointer to the page table to fill
+ *
+ * [in] pg_count
+ *   Page count to allocate
+ *
+ * [in] pg_size
+ *   Size of each page
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
+		   uint32_t pg_count,
+		   uint32_t pg_size)
+{
+	uint32_t i;
+	struct tfp_calloc_parms parms;
+
+	parms.nitems = pg_count;
+	parms.size = sizeof(void *);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0)
+		return -ENOMEM;
+
+	tp->pg_va_tbl = parms.mem_va;
+
+	if (tfp_calloc(&parms) != 0) {
+		tfp_free(tp->pg_va_tbl);
+		return -ENOMEM;
+	}
+
+	tp->pg_pa_tbl = parms.mem_va;
+
+	tp->pg_count = 0;
+	tp->pg_size = pg_size;
+
+	for (i = 0; i < pg_count; i++) {
+		parms.nitems = 1;
+		parms.size = pg_size;
+		parms.alignment = TF_EM_PAGE_ALIGNMENT;
+
+		if (tfp_calloc(&parms) != 0)
+			goto cleanup;
+
+		tp->pg_pa_tbl[i] = (uint64_t)parms.mem_pa;
+		tp->pg_va_tbl[i] = parms.mem_va;
+
+		memset(tp->pg_va_tbl[i], 0, pg_size);
+		tp->pg_count++;
+	}
+
+	return 0;
+
+cleanup:
+	tf_em_free_pg_tbl(tp);
+	return -ENOMEM;
+}
+
+/**
+ * Allocates EM page tables
+ *
+ * [in] tbl
+ *   Table to allocate pages for
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp;
+	int rc = 0;
+	int i;
+	uint32_t j;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		rc = tf_em_alloc_pg_tbl(tp,
+					tbl->page_cnt[i],
+					TF_EM_PAGE_SIZE);
+		if (rc) {
+			PMD_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d\n",
+				i);
+			goto cleanup;
+		}
+
+		for (j = 0; j < tp->pg_count; j++) {
+			PMD_DRV_LOG(INFO,
+				"EEM: Allocated page table: size %u lvl %d cnt"
+				" %u VA:%p PA:%p\n",
+				TF_EM_PAGE_SIZE,
+				i,
+				tp->pg_count,
+				(uint32_t *)tp->pg_va_tbl[j],
+				(uint32_t *)tp->pg_pa_tbl[j]);
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_free_page_table(tbl);
+	return rc;
+}
+
+/**
+ * Links EM page tables
+ *
+ * [in] tp
+ *   Pointer to page table
+ *
+ * [in] tp_next
+ *   Pointer to the next page table
+ *
+ * [in] set_pte_last
+ *   Flag controlling if the page table is last
+ */
+static void
+tf_em_link_page_table(struct tf_em_page_tbl *tp,
+		      struct tf_em_page_tbl *tp_next,
+		      bool set_pte_last)
+{
+	uint64_t *pg_pa = tp_next->pg_pa_tbl;
+	uint64_t *pg_va;
+	uint64_t valid;
+	uint32_t k = 0;
+	uint32_t i;
+	uint32_t j;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			if (k == tp_next->pg_count - 2 && set_pte_last)
+				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
+			else if (k == tp_next->pg_count - 1 && set_pte_last)
+				valid = PTU_PTE_LAST | PTU_PTE_VALID;
+			else
+				valid = PTU_PTE_VALID;
+
+			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
+			if (++k >= tp_next->pg_count)
+				return;
+		}
+	}
+}
+
+/**
+ * Setup a EM page table
+ *
+ * [in] tbl
+ *   Pointer to EM page table
+ */
+static void
+tf_em_setup_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp_next;
+	struct tf_em_page_tbl *tp;
+	bool set_pte_last = 0;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl - 1; i++) {
+		tp = &tbl->pg_tbl[i];
+		tp_next = &tbl->pg_tbl[i + 1];
+		if (i == tbl->num_lvl - 2)
+			set_pte_last = 1;
+		tf_em_link_page_table(tp, tp_next, set_pte_last);
+	}
+
+	tbl->l0_addr = tbl->pg_tbl[PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[PT_LVL_0].pg_pa_tbl[0];
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
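A worked example of the level calculation (assuming 4K pages): a 1M-entry table of 64B entries needs 64MB of data; level 1 covers 512 * 4K = 2MB and level 2 covers 512 * 2MB = 1GB, so three levels are required:

	static void example_size_levels(void)
	{
		uint64_t pages;
		int lvl;

		/* 1M entries * 64B == 64MB of data */
		lvl = tf_em_size_page_tbl_lvl(TF_EM_PG_SZ_4K, 64,
					      1 << 20, &pages);
		/* lvl == PT_LVL_2, pages == 16384 (64MB / 4K) */
		(void)lvl;
	}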
+
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == PT_LVL_0) {
+		page_cnt[PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == PT_LVL_1) {
+		page_cnt[PT_LVL_1] = num_data_pages;
+		page_cnt[PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	} else if (max_lvl == PT_LVL_2) {
+		page_cnt[PT_LVL_2] = num_data_pages;
+		page_cnt[PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_2], page_size);
+		page_cnt[PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
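Continuing the sizing example above (assuming PT_LVL_0..PT_LVL_2 are 0..2): with 16384 data pages at 4K, each level needs 1/512th as many pointer pages as the level below it:

	static void example_size_tables(void)
	{
		uint32_t page_cnt[PT_LVL_2 + 1] = { 0 };

		tf_em_size_page_tbls(PT_LVL_2, 16384, TF_EM_PG_SZ_4K,
				     page_cnt);
		/*
		 * page_cnt[PT_LVL_2] == 16384 data pages
		 * page_cnt[PT_LVL_1] == 32 pointer pages (16384 / 512)
		 * page_cnt[PT_LVL_0] == 1 root page
		 */
	}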
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   - EINVAL - Parameter error
+ *   - ENOMEM - Out of memory
+ */
+static int
+tf_em_size_table(struct tf_em_table *tbl)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
+					  tbl->entry_size,
+					  num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		PMD_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		PMD_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type,
+			    (uint64_t)num_entries * tbl->entry_size,
+			    TF_EM_PAGE_SIZE);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
+				page_cnt);
+
+	PMD_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	PMD_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
+		    num_data_pages,
+		    page_cnt[PT_LVL_0],
+		    page_cnt[PT_LVL_1],
+		    page_cnt[PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ */
+static void
+tf_em_ctx_unreg(struct tf *tfp,
+		struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct tf_em_table *tbl;
+	int i;
+
+	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
+			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
+			tf_em_free_page_table(tbl);
+		}
+	}
+}
+
+/**
+ * Registers EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of Memory
+ */
+static int
+tf_em_ctx_reg(struct tf *tfp,
+	      struct tf_tbl_scope_cb *tbl_scope_cb,
+	      int dir)
+{
+	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct tf_em_table *tbl;
+	int rc = 0;
+	int i;
+
+	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries && tbl->entry_size) {
+			rc = tf_em_size_table(tbl);
+
+			if (rc)
+				goto cleanup;
+
+			rc = tf_em_alloc_page_table(tbl);
+			if (rc)
+				goto cleanup;
+
+			tf_em_setup_page_table(tbl);
+			rc = tf_msg_em_mem_rgtr(tfp,
+						tbl->num_lvl - 1,
+						TF_EM_PAGE_SIZE_ENUM,
+						tbl->l0_dma_addr,
+						&tbl->ctx_id);
+			if (rc)
+				goto cleanup;
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	return rc;
+}
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
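+ *
+ * Illustrative sizing example: rx_mem_size_in_mb = 64 with a 448-bit
+ * max key and a 64-bit action entry gives key_b = 114 and
+ * action_b = 9, i.e. roughly 545K entries, which the code below
+ * rounds up to the next power of two before converting the result to
+ * rx_num_flows_in_k.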
+ */
+static int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			PMD_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR, "EEM: Invalid number of Rx requested: "
+				    "%u\n",
+				    num_entries);
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->tx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+				    num_entries);
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].entry_size =
+		parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[EFC_TABLE].num_entries =
+		0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].entry_size =
+		parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[EFC_TABLE].num_entries =
+		0;
+
+	return 0;
+}
+
+/**
+ * Internal function to set a Table Entry. Supports all internal Table Types
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_set_tbl_entry_internal(struct tf *tfp,
+			  struct tf_set_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Set the entry */
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Set failed, type:%d, rc:%d\n",
+			    parms->dir,
+			    parms->type,
+			    rc);
+	}
+
+	return rc;
+}
+
+/**
+ * Internal function to get a Table Entry. Supports all Table Types
+ * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_get_tbl_entry_internal(struct tf *tfp,
+			  struct tf_get_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Get failed, type:%d, rc:%d\n",
+			    parms->dir,
+			    parms->type,
+			    rc);
+	}
+
+	return rc;
+}
+
+#if (TF_SHADOW == 1)
+/**
+ * Allocate Tbl entry from the Shadow DB. The Shadow DB is searched
+ * for the requested entry. If found, the ref count is incremented
+ * and returned.
+ *
+ * [in] tfs
+ *   Pointer to session
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0           - Success, entry found and ref count incremented
+ *  -EOPNOTSUPP - Failure, search not supported
+ */
+static int
+tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
+			  struct tf_alloc_tbl_entry_parms *parms)
+{
+	PMD_DRV_LOG(ERR,
+		    "dir:%d, Entry Alloc with search not supported\n",
+		    parms->dir);
+
+	return -EOPNOTSUPP;
+}
+
+/**
+ * Free Tbl entry from the Shadow DB. The Shadow DB is searched for
+ * the requested entry. If found, the ref count is decremented and
+ * the new ref_count returned.
+ *
+ * [in] tfs
+ *   Pointer to session
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0           - Success, entry found and ref count decremented
+ *  -EOPNOTSUPP - Failure, search not supported
+ */
+static int
+tf_free_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
+			 struct tf_free_tbl_entry_parms *parms)
+{
+	PMD_DRV_LOG(ERR,
+		    "dir:%d, Entry Free with search not supported\n",
+		    parms->dir);
+
+	return -EOPNOTSUPP;
+}
+#endif /* TF_SHADOW */
+
+/**
+ * Create External Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ * [in] tbl_scope_id
+ *   id of the table scope
+ * [in] num_entries
+ *   number of entries to write
+ * [in] entry_sz_bytes
+ *   size of each entry
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
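+ *
+ * The pool is a stack of byte offsets into the external memory, one
+ * per entry; offsets are pushed from the highest value down so the
+ * first allocation pops the lowest offset.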
+ */
+static int
+tf_create_tbl_pool_external(struct tf_session *session,
+			    enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb,
+			    uint32_t table_scope_id,
+			    uint32_t num_entries,
+			    uint32_t entry_sz_bytes)
+
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &tbl_scope_cb->ext_pool[dir][TF_EXT_POOL_0];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		PMD_DRV_LOG(ERR, "%d: TBL: external pool failure %s\n",
+			    dir, strerror(-ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR, "%d: TBL: stack init failure %s\n",
+			    dir, strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Save the allocated memory address so that it can
+	 * be freed when the table scope is freed.
+	 */
+	tbl_scope_cb->ext_pool_mem[dir][TF_EXT_POOL_0] =
+		(uint32_t *)parms.mem_va;
+
+	/* Fill pool with indexes
+	 */
+	j = num_entries * entry_sz_bytes - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
+				    dir, strerror(-rc));
+			goto cleanup;
+		}
+		j -= entry_sz_bytes;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
+			    dir, strerror(-rc));
+		goto cleanup;
+	}
+	/* Set the table scope associated with the pool
+	 */
+	session->ext_pool_2_scope[dir][TF_EXT_POOL_0] = table_scope_id;
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Destroy External Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ *
+ */
+static void
+tf_destroy_tbl_pool_external(struct tf_session *session,
+			    enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	uint32_t *ext_pool_mem =
+		tbl_scope_cb->ext_pool_mem[dir][TF_EXT_POOL_0];
+
+	tfp_free(ext_pool_mem);
+
+	/* Invalidate the table scope associated with the pool */
+	session->ext_pool_2_scope[dir][TF_EXT_POOL_0] = TF_TBL_SCOPE_INVALID;
+}
+
+/**
+ * Allocate External Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+static int
+tf_alloc_tbl_entry_pool_external(struct tf *tfp,
+				 struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	uint32_t index;
+	struct tf_session *tfs;
+	uint32_t tbl_scope_id;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_id = tfs->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+	tbl_scope_cb = tbl_scope_cb_find(tfs, tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, table scope not allocated\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_pool[parms->dir][TF_EXT_POOL_0];
+
+	/* Allocate an element
+	 */
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Allocation failed, type:%d\n",
+		   parms->dir,
+		   parms->type);
+		return rc;
+	}
+	parms->idx = index;
+	return rc;
+}
+
+/**
+ * Allocate Internal Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated
+ *  -ENOMEM - Failure, entry not allocated, out of resources
+ */
+static int
+tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
+				 struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	int free_cnt;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	id = ba_alloc(session_pool);
+	if (id == -1) {
+		free_cnt = ba_free_count(session_pool);
+
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Allocation failed, type:%d, free:%d\n",
+		   parms->dir,
+		   parms->type,
+		   free_cnt);
+		return -ENOMEM;
+	}
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_ADD_BASE,
+			    id,
+			    &index);
+	parms->idx = index;
+	return rc;
+}
+
+/**
+ * Free External Tbl entry to the session pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+static int
+tf_free_tbl_entry_pool_external(struct tf *tfp,
+		       struct tf_free_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	uint32_t index;
+	uint32_t tbl_scope_id;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_id = tfs->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+	tbl_scope_cb = tbl_scope_cb_find(tfs, tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, table scope error\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_pool[parms->dir][TF_EXT_POOL_0];
+
+	index = parms->idx;
+
+	rc = stack_push(pool, index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, consistency error, stack full, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+	}
+	return rc;
+}
+
+/**
+ * Free Internal Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *  -ENOMEM - Failure, entry was not previously allocated
+ */
+static int
+tf_free_tbl_entry_pool_internal(struct tf *tfp,
+		       struct tf_free_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	int id;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+	uint32_t index;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Check if element was indeed allocated */
+	id = ba_inuse_free(session_pool, index);
+	if (id == -1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Element not previously alloc'ed, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -ENOMEM;
+	}
+
+	return rc;
+}
+
 /* API defined in tf_tbl.h */
 void
 tf_init_tbl_pool(struct tf_session *session)
@@ -41,3 +1401,436 @@ tf_init_tbl_pool(struct tf_session *session)
 			TF_TBL_SCOPE_INVALID;
 	}
 }
+
+/* API defined in tf_em.h */
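+/**
+ * Look up the table scope control block for the given table scope id.
+ * Returns NULL when the id is not a valid, in-use scope.
+ */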
+struct tf_tbl_scope_cb *
+tbl_scope_cb_find(struct tf_session *session,
+		  uint32_t tbl_scope_id)
+{
+	int i;
+
+	/* Check that id is valid */
+	i = ba_inuse(session->tbl_scope_pool_rx, tbl_scope_id);
+	if (i < 0)
+		return NULL;
+
+	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
+		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &session->tbl_scopes[i];
+	}
+
+	return NULL;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_eem_tbl_scope_cb(struct tf *tfp,
+			 struct tf_free_tbl_scope_parms *parms)
+{
+	int rc = 0;
+	enum tf_dir  dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+
+	session = (struct tf_session *)(tfp->session->core_data);
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	/* Free Table control block */
+	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
+
+	/* Free the external pools and EM context for each direction */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(session,
+					     dir,
+					     tbl_scope_cb);
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+
+		/* free table scope and all associated resources */
+		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	}
+
+	return rc;
+}
+
+/* API defined in tf_em.h */
+int
+tf_alloc_eem_tbl_scope(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	enum tf_dir dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_em_table *em_tables;
+	int index;
+	struct tf_session *session;
+	struct tf_free_tbl_scope_parms free_parms;
+
+	/* check parameters */
+	if (parms == NULL || tfp->session == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* Get Table Scope control block from the session pool */
+	index = ba_alloc(session->tbl_scope_pool_rx);
+	if (index == -1) {
+		PMD_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
+			    "Control Block\n");
+		return -ENOMEM;
+	}
+
+	tbl_scope_cb = &session->tbl_scopes[index];
+	tbl_scope_cb->index = index;
+	tbl_scope_cb->tbl_scope_id = index;
+	parms->tbl_scope_id = index;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				"EEM: Unable to query for EEM capability\n");
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/*
+		 * Allocate tables and signal configuration to FW
+		 */
+		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx\n");
+			goto cleanup;
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_msg_em_cfg(tfp,
+				   em_tables[KEY0_TABLE].num_entries,
+				   em_tables[KEY0_TABLE].ctx_id,
+				   em_tables[KEY1_TABLE].ctx_id,
+				   em_tables[RECORD_TABLE].ctx_id,
+				   em_tables[EFC_TABLE].ctx_id,
+				   dir);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				"TBL: Unable to configure EEM in firmware\n");
+			goto cleanup_full;
+		}
+
+		rc = tf_msg_em_op(tfp,
+				  dir,
+				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
+
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware\n");
+			goto cleanup_full;
+		}
+
+		/* Allocate the pool of offsets of the external memory.
+		 * Initially, this is a single fixed size pool for all external
+		 * actions related to a single table scope.
+		 */
+		rc = tf_create_tbl_pool_external(session,
+						 dir,
+						 tbl_scope_cb,
+						 index,
+						 TF_EXT_POOL_ENTRY_CNT,
+						 TF_EXT_POOL_ENTRY_SZ_BYTES);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "%d TBL: Unable to allocate idx pools %s\n",
+				    dir,
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	return 0;
+
+cleanup_full:
+	free_parms.tbl_scope_id = index;
+	tf_free_eem_tbl_scope_cb(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
+	return -EINVAL;
+}
+
+/* API defined in tf_core.h */
+int
+tf_set_tbl_entry(struct tf *tfp,
+		 struct tf_set_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+
+	if (tfp == NULL || parms == NULL || parms->data == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		void *base_addr;
+		uint32_t offset = TF_ACT_REC_INDEX_2_OFFSET(parms->idx);
+		uint32_t tbl_scope_id;
+
+		session = (struct tf_session *)(tfp->session->core_data);
+
+		tbl_scope_id =
+			session->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+
+		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Table scope not allocated\n",
+				    parms->dir);
+			return -EINVAL;
+		}
+
+		/* Get the table scope control block associated with the
+		 * external pool
+		 */
+
+		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
+
+		if (tbl_scope_cb == NULL)
+			return -EINVAL;
+
+		/* External table, implicitly the Action table */
+		base_addr = tf_em_get_table_page(tbl_scope_cb,
+						 parms->dir,
+						 offset,
+						 RECORD_TABLE);
+		if (base_addr == NULL) {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Base address lookup failed\n",
+				    parms->dir);
+			return -EINVAL;
+		}
+
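+		/* Reduce to the offset within the page returned above,
+		 * then copy the entry data into place.
+		 */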
+		offset %= TF_EM_PAGE_SIZE;
+		rte_memcpy((char *)base_addr + offset,
+			   parms->data,
+			   parms->data_sz_in_bytes);
+	} else {
+		/* Internal table type processing */
+		rc = tf_set_tbl_entry_internal(tfp, parms);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Set failed, type:%d, rc:%d\n",
+				    parms->dir,
+				    parms->type,
+				    rc);
+		}
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_get_tbl_entry(struct tf *tfp,
+		 struct tf_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, External table type not supported\n",
+			    parms->dir);
+
+		rc = -EOPNOTSUPP;
+	} else {
+		/* Internal table type processing */
+		rc = tf_get_tbl_entry_internal(tfp, parms);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Get failed, type:%d, rc:%d\n",
+				    parms->dir,
+				    parms->type,
+				    rc);
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_scope(struct tf *tfp,
+		   struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+
+	/* check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	rc = tf_alloc_eem_tbl_scope(tfp, parms);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_scope(struct tf *tfp,
+		  struct tf_free_tbl_scope_parms *parms)
+{
+	int rc;
+
+	/* check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	/* free table scope and all associated resources */
+	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_entry(struct tf *tfp,
+		   struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+#if (TF_SHADOW == 1)
+	struct tf_session *tfs;
+#endif /* TF_SHADOW */
+
+	/* Check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+	/*
+	 * No shadow copy support for external tables, allocate and return
+	 */
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		rc = tf_alloc_tbl_entry_pool_external(tfp, parms);
+		return rc;
+	}
+
+#if (TF_SHADOW == 1)
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Search the Shadow DB for the requested element. If not found,
+	 * allocate one from the Session Pool.
+	 */
+	if (parms->search_enable && tfs->shadow_copy) {
+		rc = tf_alloc_tbl_entry_shadow(tfs, parms);
+		/* Entry found and parms populated with return data */
+		if (rc == 0)
+			return rc;
+	}
+#endif /* TF_SHADOW */
+
+	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
+	if (rc)
+		PMD_DRV_LOG(ERR, "dir%d, Alloc failed, rc:%d\n",
+			    parms->dir,
+			    rc);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_entry(struct tf *tfp,
+		  struct tf_free_tbl_entry_parms *parms)
+{
+	int rc;
+#if (TF_SHADOW == 1)
+	struct tf_session *tfs;
+#endif /* TF_SHADOW */
+
+	/* Check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+	/*
+	 * No shadow of external tables so just free the entry
+	 */
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		rc = tf_free_tbl_entry_pool_external(tfp, parms);
+		return rc;
+	}
+
+#if (TF_SHADOW == 1)
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Search the Shadow DB for the requested element. If found,
+	 * the ref count is decremented there.
+	 */
+	if (parms->search_enable && tfs->shadow_copy) {
+		rc = tf_free_tbl_entry_shadow(tfs, parms);
+		/* Entry freed and parms populated with return data */
+		if (rc == 0)
+			return rc;
+	}
+#endif /* TF_SHADOW */
+
+	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
+
+	if (rc)
+		PMD_DRV_LOG(ERR, "dir:%d, Alloc failed, rc:%d\n",
+			    parms->dir,
+			    rc);
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 5a5e72f..cb7ce9d 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -7,6 +7,7 @@
 #define _TF_TBL_H_
 
 #include <stdint.h>
+#include "stack.h"
 
 enum tf_pg_tbl_lvl {
 	PT_LVL_0,
@@ -15,6 +16,48 @@ enum tf_pg_tbl_lvl {
 	PT_LVL_MAX
 };
 
+enum tf_em_table_type {
+	KEY0_TABLE,
+	KEY1_TABLE,
+	RECORD_TABLE,
+	EFC_TABLE,
+	MAX_TABLE
+};
+
+struct tf_em_page_tbl {
+	uint32_t	pg_count;
+	uint32_t	pg_size;
+	void		**pg_va_tbl;
+	uint64_t	*pg_pa_tbl;
+};
+
+struct tf_em_table {
+	int				type;
+	uint32_t			num_entries;
+	uint16_t			ctx_id;
+	uint32_t			entry_size;
+	int				num_lvl;
+	uint32_t			page_cnt[PT_LVL_MAX];
+	uint64_t			num_data_pages;
+	void				*l0_addr;
+	uint64_t			l0_dma_addr;
+	struct tf_em_page_tbl pg_tbl[PT_LVL_MAX];
+};
+
+struct tf_em_ctx_mem_info {
+	struct tf_em_table		em_tables[MAX_TABLE];
+};
+
+/** EEM capabilities, stored in the table scope control block */
+struct tf_em_caps {
+	uint32_t flags;
+	uint32_t supported;
+	uint32_t max_entries_supported;
+	uint16_t key_entry_size;
+	uint16_t record_entry_size;
+	uint16_t efc_entry_size;
+};
+
 /** Invalid table scope id */
 #define TF_TBL_SCOPE_INVALID 0xffffffff
 
@@ -27,9 +70,49 @@ enum tf_pg_tbl_lvl {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
+	struct tf_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
+	struct tf_em_caps          em_caps[TF_DIR_MAX];
+	struct stack               ext_pool[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 	uint32_t              *ext_pool_mem[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 };
 
+/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.
+ * Round-down other page sizes to the lower hardware page size supported.
+ */
+#define PAGE_SHIFT 22 /** 4M requested; rounds down to a 2M EM page below */
+
+#if (PAGE_SHIFT < 12)				/** < 4K >> 4K */
+#define TF_EM_PAGE_SHIFT 12
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
+#elif (PAGE_SHIFT <= 13)			/** 4K, 8K */
+#define TF_EM_PAGE_SHIFT 13
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
+#elif (PAGE_SHIFT < 16)				/** 16K, 32K >> 8K */
+#define TF_EM_PAGE_SHIFT 15
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_32K
+#elif (PAGE_SHIFT <= 17)			/** 64K, 128K >> 64K */
+#define TF_EM_PAGE_SHIFT 16
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
+#elif (PAGE_SHIFT <= 19)			/** 256K, 512K >> 256K */
+#define TF_EM_PAGE_SHIFT 18
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
+#elif (PAGE_SHIFT <= 21)			/** 1M */
+#define TF_EM_PAGE_SHIFT 20
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
+#elif (PAGE_SHIFT <= 22)			/** 2M, 4M */
+#define TF_EM_PAGE_SHIFT 21
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
+#elif (PAGE_SHIFT <= 29)			/** 8M ... 512M >> 4M */
+#define TF_EM_PAGE_SHIFT 22
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
+#else						/** >= 1G >> 1G */
+#define TF_EM_PAGE_SHIFT	30
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#endif
+
+#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
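+
+/*
+ * With PAGE_SHIFT set to 22 above, the chain selects TF_EM_PAGE_SHIFT
+ * 21, i.e. 2MB EM pages (HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M),
+ * so TF_EM_PAGE_SIZE and TF_EM_PAGE_ALIGNMENT are both 2MB.
+ */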
+
 /**
  * Initialize table pool structure to indicate
  * no table scope has been associated with the
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 13/34] net/bnxt: fetch SVIF information from the firmware
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (11 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 12/34] net/bnxt: add EM/EEM functionality Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 14/34] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
                       ` (21 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

SVIF (source virtual interface) represents a physical port, a physical
function, or a virtual function. SVIF is compared during L2 context and
exact match lookups in the TX direction, and is masked to extract port
information during L2 context and exact match lookups in the RX
direction. Hence, the driver needs the SVIF information to program the
L2 context and exact match tables.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  6 ++++++
 drivers/net/bnxt/bnxt_ethdev.c | 14 ++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.c   | 34 ++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 55 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index a8e57ca..2ed56f4 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -682,6 +682,9 @@ struct bnxt {
 #define BNXT_FLOW_ID_MASK	0x0000ffff
 	struct bnxt_mark_info	*mark_table;
 
+#define	BNXT_SVIF_INVALID	0xFFFF
+	uint16_t		func_svif;
+	uint16_t		port_svif;
 	struct tf               tfp;
 };
 
@@ -723,4 +726,7 @@ extern int bnxt_logtype_driver;
 
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
+
 #endif
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 93d0062..f3cc745 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4696,6 +4696,18 @@ static void bnxt_config_vf_req_fwd(struct bnxt *bp)
 	ALLOW_FUNC(HWRM_VNIC_TPA_CFG);
 }
 
+uint16_t
+bnxt_get_svif(uint16_t port_id, bool func_svif)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	bp = eth_dev->data->dev_private;
+
+	return func_svif ? bp->func_svif : bp->port_svif;
+}
+
 static int bnxt_init_fw(struct bnxt *bp)
 {
 	uint16_t mtu;
@@ -4731,6 +4743,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 	if (rc)
 		return rc;
 
+	bnxt_hwrm_port_mac_qcfg(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 443553b..0eaf917 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3010,6 +3010,8 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 	uint16_t flags;
 	int rc = 0;
+	uint16_t svif_info;
+	bp->func_svif = BNXT_SVIF_INVALID;
 
 	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
@@ -3020,6 +3022,12 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 
 	/* Hard Coded.. 0xfff VLAN ID mask */
 	bp->vlan = rte_le_to_cpu_16(resp->vlan) & 0xfff;
+
+	svif_info = rte_le_to_cpu_16(resp->svif_info);
+	if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
+		bp->func_svif =	svif_info &
+				     HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK;
+
 	flags = rte_le_to_cpu_16(resp->flags);
 	if (BNXT_PF(bp) && (flags & HWRM_FUNC_QCFG_OUTPUT_FLAGS_MULTI_HOST))
 		bp->flags |= BNXT_FLAG_MULTI_HOST;
@@ -3056,6 +3064,32 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
+{
+	struct hwrm_port_mac_qcfg_input req = {0};
+	struct hwrm_port_mac_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	uint16_t port_svif_info;
+	int rc;
+
+	bp->port_svif = BNXT_SVIF_INVALID;
+
+	HWRM_PREP(&req, HWRM_PORT_MAC_QCFG, BNXT_USE_CHIMP_MB);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	port_svif_info = rte_le_to_cpu_16(resp->port_svif_info);
+	if (port_svif_info &
+	    HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_VALID)
+		bp->port_svif = port_svif_info &
+			HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_MASK;
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 static void copy_func_cfg_to_qcaps(struct hwrm_func_cfg_input *fcfg,
 				   struct hwrm_func_qcaps_output *qcaps)
 {
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index df7aa74..0079d8a 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -193,6 +193,7 @@ int bnxt_hwrm_port_qstats(struct bnxt *bp);
 int bnxt_hwrm_port_clr_stats(struct bnxt *bp);
 int bnxt_hwrm_port_led_cfg(struct bnxt *bp, bool led_on);
 int bnxt_hwrm_port_led_qcaps(struct bnxt *bp);
+int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp);
 int bnxt_hwrm_func_cfg_vf_set_flags(struct bnxt *bp, uint16_t vf,
 					uint32_t flags);
 void vf_vnic_set_rxmask_cb(struct bnxt_vnic_info *vnic, void *flagp);
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 14/34] net/bnxt: fetch vnic info from DPDK port
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (12 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 13/34] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature Venkat Duvvuru
                       ` (20 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

The VNIC is needed for the driver to program the action record for rx
flows. The VNIC determines which receive rings are used to place the
received packets. This patch introduces a routine that converts a given
DPDK port to its VNIC.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  1 +
 drivers/net/bnxt/bnxt_ethdev.c | 15 +++++++++++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 2ed56f4..c4507f7 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -727,6 +727,7 @@ extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
 
+uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
 
 #endif
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f3cc745..57ed90f 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4708,6 +4708,21 @@ bnxt_get_svif(uint16_t port_id, bool func_svif)
 	return func_svif ? bp->func_svif : bp->port_svif;
 }
 
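+/*
+ * Return the firmware VNIC id of the default VNIC for the given
+ * DPDK port.
+ */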
+uint16_t
+bnxt_get_vnic_id(uint16_t port)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt_vnic_info *vnic;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port];
+	bp = eth_dev->data->dev_private;
+
+	vnic = BNXT_GET_DEFAULT_VNIC(bp);
+
+	return vnic->fw_vnic_id;
+}
+
 static int bnxt_init_fw(struct bnxt *bp)
 {
 	uint16_t mtu;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (13 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 14/34] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 16/34] net/bnxt: add support for ULP session manager init Venkat Duvvuru
                       ` (19 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

This feature can be enabled by passing
"-w 0000:0d:00.0,host-based-truflow=1" to the DPDK application.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  4 ++-
 drivers/net/bnxt/bnxt_ethdev.c | 73 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 76 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index c4507f7..cd84ebd 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -685,7 +685,9 @@ struct bnxt {
 #define	BNXT_SVIF_INVALID	0xFFFF
 	uint16_t		func_svif;
 	uint16_t		port_svif;
-	struct tf               tfp;
+
+	struct tf		tfp;
+	uint8_t			truflow;
 };
 
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 57ed90f..c4bbf1d 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -12,6 +12,7 @@
 #include <rte_malloc.h>
 #include <rte_cycles.h>
 #include <rte_alarm.h>
+#include <rte_kvargs.h>
 
 #include "bnxt.h"
 #include "bnxt_filter.h"
@@ -126,6 +127,18 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 				     DEV_RX_OFFLOAD_SCATTER | \
 				     DEV_RX_OFFLOAD_RSS_HASH)
 
+#define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
+static const char *const bnxt_dev_args[] = {
+	BNXT_DEVARG_TRUFLOW,
+	NULL
+};
+
+/*
+ * truflow == false to disable the feature
+ * truflow == true to enable the feature
+ */
+#define	BNXT_DEVARG_TRUFLOW_INVALID(truflow)	((truflow) > 1)
+
 static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask);
 static void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
 static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
@@ -4854,6 +4867,63 @@ static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev)
 }
 
 static int
+bnxt_parse_devarg_truflow(__rte_unused const char *key,
+			  const char *value, void *opaque_arg)
+{
+	struct bnxt *bp = opaque_arg;
+	unsigned long truflow;
+	char *end = NULL;
+
+	if (!value || !opaque_arg) {
+		PMD_DRV_LOG(ERR,
+			    "Invalid parameter passed to truflow devargs.\n");
+		return -EINVAL;
+	}
+
+	truflow = strtoul(value, &end, 10);
+	if (end == NULL || *end != '\0' ||
+	    (truflow == ULONG_MAX && errno == ERANGE)) {
+		PMD_DRV_LOG(ERR,
+			    "Invalid parameter passed to truflow devargs.\n");
+		return -EINVAL;
+	}
+
+	if (BNXT_DEVARG_TRUFLOW_INVALID(truflow)) {
+		PMD_DRV_LOG(ERR,
+			    "Invalid value passed to truflow devargs.\n");
+		return -EINVAL;
+	}
+
+	bp->truflow = truflow;
+	if (bp->truflow)
+		PMD_DRV_LOG(INFO, "Host-based truflow feature enabled.\n");
+
+	return 0;
+}
+
+static void
+bnxt_parse_dev_args(struct bnxt *bp, struct rte_devargs *devargs)
+{
+	struct rte_kvargs *kvlist;
+
+	if (devargs == NULL)
+		return;
+
+	kvlist = rte_kvargs_parse(devargs->args, bnxt_dev_args);
+	if (kvlist == NULL)
+		return;
+
+	/*
+	 * Handler for "truflow" devarg.
+	 * Invoked as for ex: "-w 0000:00:0d.0,host-based-truflow=1"
+	 */
+	rte_kvargs_process(kvlist, BNXT_DEVARG_TRUFLOW,
+			   bnxt_parse_devarg_truflow, bp);
+
+	rte_kvargs_free(kvlist);
+}
+
+static int
 bnxt_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
@@ -4879,6 +4949,9 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 
 	bp = eth_dev->data->dev_private;
 
+	/* Parse dev arguments passed on when starting the DPDK application. */
+	bnxt_parse_dev_args(bp, pci_dev->device.devargs);
+
 	bp->flags &= ~BNXT_FLAG_RX_VECTOR_PKT_MODE;
 
 	if (bnxt_vf_pciid(pci_dev->id.device_id))
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 16/34] net/bnxt: add support for ULP session manager init
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (14 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 17/34] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
                       ` (18 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Mike Baucom

A ULP session will contain all the resources needed to support
rte flow offloads. A session is initialized as part of rte_eth_device
start. A DPDK application can have multiple interfaces, which
means rte_eth_device start will be called for each of these devices.
The ULP session manager will make sure that a single ULP session is
initialized only once. Apart from this, it also initializes the MARK
database, EEM table & flow database. The ULP session manager also
maintains a list of all opened ULP sessions.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   6 +-
 drivers/net/bnxt/bnxt.h                       |   5 +
 drivers/net/bnxt/bnxt_ethdev.c                |   4 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |  35 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            | 527 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            | 100 +++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 187 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |  77 ++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |  94 +++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h        |  49 +++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     |  27 ++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |  35 ++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  40 ++
 13 files changed, 1185 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.c
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_struct.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 4c95847..bb9b888 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -44,7 +44,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
 ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD), y)
-CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core -I$(SRCDIR)/tf_ulp
 endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
@@ -57,6 +57,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index cd84ebd..cd20740 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -22,6 +22,7 @@
 #include "bnxt_util.h"
 
 #include "tf_core.h"
+#include "bnxt_ulp.h"
 
 /* Vendor ID */
 #define PCI_VENDOR_ID_BROADCOM		0x14E4
@@ -687,6 +688,7 @@ struct bnxt {
 	uint16_t		port_svif;
 
 	struct tf		tfp;
+	struct bnxt_ulp_context	ulp_ctx;
 	uint8_t			truflow;
 };
 
@@ -729,6 +731,9 @@ extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
 
+int32_t bnxt_ulp_init(struct bnxt *bp);
+void bnxt_ulp_deinit(struct bnxt *bp);
+
 uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index c4bbf1d..1703ce3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -904,6 +904,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 	pthread_mutex_lock(&bp->def_cp_lock);
 	bnxt_schedule_fw_health_check(bp);
 	pthread_mutex_unlock(&bp->def_cp_lock);
+
+	if (bp->truflow)
+		bnxt_ulp_init(bp);
+
 	return 0;
 
 error:
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
new file mode 100644
index 0000000..3516df4
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_TF_COMMON_H_
+#define _BNXT_TF_COMMON_H_
+
+#define BNXT_TF_DBG(lvl, fmt, args...)	PMD_DRV_LOG(lvl, fmt, ## args)
+
+#define BNXT_ULP_EM_FLOWS			8192
+#define BNXT_ULP_1M_FLOWS			1000000
+#define BNXT_EEM_RX_GLOBAL_ID_MASK		(BNXT_ULP_1M_FLOWS - 1)
+#define BNXT_EEM_TX_GLOBAL_ID_MASK		(BNXT_ULP_1M_FLOWS - 1)
+#define BNXT_EEM_HASH_KEY2_USED			0x8000000
+#define BNXT_EEM_RX_HW_HASH_KEY2_BIT		BNXT_ULP_1M_FLOWS
+#define	BNXT_ULP_DFLT_RX_MAX_KEY		512
+#define	BNXT_ULP_DFLT_RX_MAX_ACTN_ENTRY		256
+#define	BNXT_ULP_DFLT_RX_MEM			0
+#define	BNXT_ULP_RX_NUM_FLOWS			32
+#define	BNXT_ULP_RX_TBL_IF_ID			0
+#define	BNXT_ULP_DFLT_TX_MAX_KEY		512
+#define	BNXT_ULP_DFLT_TX_MAX_ACTN_ENTRY		256
+#define	BNXT_ULP_DFLT_TX_MEM			0
+#define	BNXT_ULP_TX_NUM_FLOWS			32
+#define	BNXT_ULP_TX_TBL_IF_ID			0
+
+struct bnxt_ulp_mark_tbl *
+bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
+
+int32_t
+bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_mark_tbl *mark_tbl);
+
+#endif /* _BNXT_TF_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
new file mode 100644
index 0000000..7afc6bf
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -0,0 +1,527 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_tailq.h>
+
+#include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
+#include "bnxt.h"
+#include "tf_core.h"
+#include "tf_ext_flow_handle.h"
+
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "ulp_mark_mgr.h"
+#include "ulp_flow_db.h"
+
+/* Linked list of all TF sessions. */
+STAILQ_HEAD(, bnxt_ulp_session_state) bnxt_ulp_session_list =
+			STAILQ_HEAD_INITIALIZER(bnxt_ulp_session_list);
+
+/* Mutex to synchronize bnxt_ulp_session_list operations. */
+static pthread_mutex_t bnxt_ulp_global_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+/*
+ * Initialize an ULP session.
+ * An ULP session will contain all the resources needed to support rte flow
+ * offloads. A session is initialized as part of rte_eth_device start.
+ * A single vswitch instance can have multiple uplinks which means
+ * rte_eth_device start will be called for each of these devices.
+ * ULP session manager will make sure that a single ULP session is only
+ * initialized once. Apart from this, it also initializes MARK database,
+ * EEM table & flow database. ULP session manager also manages a list of
+ * all opened ULP sessions.
+ */
+static int32_t
+ulp_ctx_session_open(struct bnxt *bp,
+		     struct bnxt_ulp_session_state *session)
+{
+	struct rte_eth_dev		*ethdev = bp->eth_dev;
+	int32_t				rc = 0;
+	struct tf_open_session_parms	params;
+
+	memset(&params, 0, sizeof(params));
+
+	rc = rte_eth_dev_get_name_by_port(ethdev->data->port_id,
+					  params.ctrl_chan_name);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Invalid port %d, rc = %d\n",
+			    ethdev->data->port_id, rc);
+		return rc;
+	}
+
+	rc = tf_open_session(&bp->tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to open TF session - %s, rc = %d\n",
+			    params.ctrl_chan_name, rc);
+		return -EINVAL;
+	}
+	session->session_opened = 1;
+	session->g_tfp = &bp->tfp;
+	return rc;
+}
+
+static void
+bnxt_init_tbl_scope_parms(struct bnxt *bp,
+			  struct tf_alloc_tbl_scope_parms *params)
+{
+	struct bnxt_ulp_device_params	*dparms;
+	uint32_t dev_id;
+	int rc;
+
+	rc = bnxt_ulp_cntxt_dev_id_get(&bp->ulp_ctx, &dev_id);
+	if (rc)
+		/* TBD: For now, just use default. */
+		dparms = NULL;
+	else
+		dparms = bnxt_ulp_device_params_get(dev_id);
+
+	if (!dparms) {
+		params->rx_max_key_sz_in_bits = BNXT_ULP_DFLT_RX_MAX_KEY;
+		params->rx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_RX_MAX_ACTN_ENTRY;
+		params->rx_mem_size_in_mb = BNXT_ULP_DFLT_RX_MEM;
+		params->rx_num_flows_in_k = BNXT_ULP_RX_NUM_FLOWS;
+		params->rx_tbl_if_id = BNXT_ULP_RX_TBL_IF_ID;
+
+		params->tx_max_key_sz_in_bits = BNXT_ULP_DFLT_TX_MAX_KEY;
+		params->tx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_TX_MAX_ACTN_ENTRY;
+		params->tx_mem_size_in_mb = BNXT_ULP_DFLT_TX_MEM;
+		params->tx_num_flows_in_k = BNXT_ULP_TX_NUM_FLOWS;
+		params->tx_tbl_if_id = BNXT_ULP_TX_TBL_IF_ID;
+	} else {
+		params->rx_max_key_sz_in_bits = BNXT_ULP_DFLT_RX_MAX_KEY;
+		params->rx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_RX_MAX_ACTN_ENTRY;
+		params->rx_mem_size_in_mb = BNXT_ULP_DFLT_RX_MEM;
+		params->rx_num_flows_in_k = dparms->num_flows / (1024);
+		params->rx_tbl_if_id = BNXT_ULP_RX_TBL_IF_ID;
+
+		params->tx_max_key_sz_in_bits = BNXT_ULP_DFLT_TX_MAX_KEY;
+		params->tx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_TX_MAX_ACTN_ENTRY;
+		params->tx_mem_size_in_mb = BNXT_ULP_DFLT_TX_MEM;
+		params->tx_num_flows_in_k = dparms->num_flows / (1024);
+		params->tx_tbl_if_id = BNXT_ULP_TX_TBL_IF_ID;
+	}
+}
+
+/* Initialize Extended Exact Match host memory. */
+static int32_t
+ulp_eem_tbl_scope_init(struct bnxt *bp)
+{
+	struct tf_alloc_tbl_scope_parms params = {0};
+	int rc;
+
+	bnxt_init_tbl_scope_parms(bp, &params);
+
+	rc = tf_alloc_tbl_scope(&bp->tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to allocate eem table scope rc = %d\n",
+			    rc);
+		return rc;
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_set(&bp->ulp_ctx, params.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to set table scope id\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+/* The function to free and deinit the ulp context data. */
+static int32_t
+ulp_ctx_deinit(struct bnxt *bp,
+	       struct bnxt_ulp_session_state *session)
+{
+	if (!session || !bp) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Free the contents */
+	if (session->cfg_data) {
+		rte_free(session->cfg_data);
+		bp->ulp_ctx.cfg_data = NULL;
+		session->cfg_data = NULL;
+	}
+	return 0;
+}
+
+/* The function to allocate and initialize the ulp context data. */
+static int32_t
+ulp_ctx_init(struct bnxt *bp,
+	     struct bnxt_ulp_session_state *session)
+{
+	struct bnxt_ulp_data	*ulp_data;
+	int32_t			rc = 0;
+
+	if (!session || !bp) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Allocate memory to hold ulp context data. */
+	ulp_data = rte_zmalloc("bnxt_ulp_data",
+			       sizeof(struct bnxt_ulp_data), 0);
+	if (!ulp_data) {
+		BNXT_TF_DBG(ERR, "Failed to allocate memory for ulp data\n");
+		return -ENOMEM;
+	}
+
+	/* Increment the ulp context data reference count usage. */
+	bp->ulp_ctx.cfg_data = ulp_data;
+	session->cfg_data = ulp_data;
+	ulp_data->ref_cnt++;
+
+	/* Open the ulp session. */
+	rc = ulp_ctx_session_open(bp, session);
+	if (rc) {
+		(void)ulp_ctx_deinit(bp, session);
+		return rc;
+	}
+	bnxt_ulp_cntxt_tfp_set(&bp->ulp_ctx, session->g_tfp);
+	return rc;
+}
+
+static int32_t
+ulp_ctx_attach(struct bnxt_ulp_context *ulp_ctx,
+	       struct bnxt_ulp_session_state *session)
+{
+	if (!ulp_ctx || !session) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Increment the ulp context data reference count usage. */
+	ulp_ctx->cfg_data = session->cfg_data;
+	ulp_ctx->cfg_data->ref_cnt++;
+
+	/* TBD call TF_session_attach. */
+	ulp_ctx->g_tfp = session->g_tfp;
+	return 0;
+}
+
+/*
+ * Initialize the state of an ULP session.
+ * If the state of an ULP session is not initialized, set its state to
+ * initialized. If the state is already initialized, do nothing.
+ */
+static void
+ulp_context_initialized(struct bnxt_ulp_session_state *session, bool *init)
+{
+	pthread_mutex_lock(&session->bnxt_ulp_mutex);
+
+	if (!session->bnxt_ulp_init) {
+		session->bnxt_ulp_init = true;
+		*init = false;
+	} else {
+		*init = true;
+	}
+
+	pthread_mutex_unlock(&session->bnxt_ulp_mutex);
+}
+
+/*
+ * Check if an ULP session is already allocated for a specific PCI
+ * domain & bus. If it is already allocated simply return the session
+ * pointer, otherwise allocate a new session.
+ */
+static struct bnxt_ulp_session_state *
+ulp_get_session(struct rte_pci_addr *pci_addr)
+{
+	struct bnxt_ulp_session_state *session;
+
+	STAILQ_FOREACH(session, &bnxt_ulp_session_list, next) {
+		if (session->pci_info.domain == pci_addr->domain &&
+		    session->pci_info.bus == pci_addr->bus) {
+			return session;
+		}
+	}
+	return NULL;
+}
+
+/*
+ * Allocate and initialize an ULP session and set its state to INITIALIZED.
+ * If it is already initialized, simply return the existing session.
+ */
+static struct bnxt_ulp_session_state *
+ulp_session_init(struct bnxt *bp,
+		 bool *init)
+{
+	struct rte_pci_device		*pci_dev;
+	struct rte_pci_addr		*pci_addr;
+	struct bnxt_ulp_session_state	*session;
+
+	if (!bp)
+		return NULL;
+
+	pci_dev = RTE_DEV_TO_PCI(bp->eth_dev->device);
+	pci_addr = &pci_dev->addr;
+
+	pthread_mutex_lock(&bnxt_ulp_global_mutex);
+
+	session = ulp_get_session(pci_addr);
+	if (!session) {
+		/* Session not found, allocate a new one */
+		session = rte_zmalloc("bnxt_ulp_session",
+				      sizeof(struct bnxt_ulp_session_state),
+				      0);
+		if (!session) {
+			BNXT_TF_DBG(ERR,
+				    "Allocation failed for bnxt_ulp_session\n");
+			pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+			return NULL;
+
+		} else {
+			/* Add it to the queue */
+			session->pci_info.domain = pci_addr->domain;
+			session->pci_info.bus = pci_addr->bus;
+			pthread_mutex_init(&session->bnxt_ulp_mutex, NULL);
+			STAILQ_INSERT_TAIL(&bnxt_ulp_session_list,
+					   session, next);
+		}
+	}
+	ulp_context_initialized(session, init);
+	pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+	return session;
+}
+
+/*
+ * When a port is initialized by DPDK, this function is called
+ * to initialize the ULP context and the rest of the
+ * infrastructure associated with it.
+ */
+int32_t
+bnxt_ulp_init(struct bnxt *bp)
+{
+	struct bnxt_ulp_session_state *session;
+	bool init;
+	int rc;
+
+	/*
+	 * Multiple uplink ports can be associated with a single vswitch.
+	 * Make sure only the port that is started first will initialize
+	 * the TF session.
+	 */
+	session = ulp_session_init(bp, &init);
+	if (!session) {
+		BNXT_TF_DBG(ERR, "Failed to initialize the tf session\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * If ULP is already initialized for a specific domain then simply
+	 * assign the ulp context to this rte_eth_dev.
+	 */
+	if (init) {
+		rc = ulp_ctx_attach(&bp->ulp_ctx, session);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "Failed to attach the ulp context\n");
+		}
+		return rc;
+	}
+
+	/* Allocate and Initialize the ulp context. */
+	rc = ulp_ctx_init(bp, session);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the ulp context\n");
+		goto jump_to_error;
+	}
+
+	/* Create the Mark database. */
+	rc = ulp_mark_db_init(&bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the mark database\n");
+		goto jump_to_error;
+	}
+
+	/* Create the flow database. */
+	rc = ulp_flow_db_init(&bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the flow database\n");
+		goto jump_to_error;
+	}
+
+	/* Create the eem table scope. */
+	rc = ulp_eem_tbl_scope_init(bp);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the eem scope table\n");
+		goto jump_to_error;
+	}
+
+	return rc;
+
+jump_to_error:
+	return -ENOMEM;
+}
+
+/* Below are the access functions to access internal data of ulp context. */
+
+/* Function to set the Mark DB into the context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_mark_tbl *mark_tbl)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->mark_tbl = mark_tbl;
+
+	return 0;
+}
+
+/* Function to retrieve the Mark DB from the context. */
+struct bnxt_ulp_mark_tbl *
+bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return NULL;
+
+	return ulp_ctx->cfg_data->mark_tbl;
+}
+
+/* Function to set the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t dev_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		ulp_ctx->cfg_data->dev_id = dev_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to get the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_get(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t *dev_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		*dev_id = ulp_ctx->cfg_data->dev_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to get the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_get(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t *tbl_scope_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		*tbl_scope_id = ulp_ctx->cfg_data->tbl_scope_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to set the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_set(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t tbl_scope_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		ulp_ctx->cfg_data->tbl_scope_id = tbl_scope_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to set the tfp session details from the ulp context. */
+int32_t
+bnxt_ulp_cntxt_tfp_set(struct bnxt_ulp_context *ulp, struct tf *tfp)
+{
+	if (!ulp) {
+		BNXT_TF_DBG(ERR, "Invalid arguments\n");
+		return -EINVAL;
+	}
+
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	ulp->g_tfp = tfp;
+	return 0;
+}
+
+/* Function to get the tfp session details from the ulp context. */
+struct tf *
+bnxt_ulp_cntxt_tfp_get(struct bnxt_ulp_context *ulp)
+{
+	if (!ulp) {
+		BNXT_TF_DBG(ERR, "Invalid arguments\n");
+		return NULL;
+	}
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	return ulp->g_tfp;
+}
+
+/*
+ * Get the device table entry based on the device id.
+ *
+ * dev_id [in] The device id of the hardware
+ *
+ * Returns the pointer to the device parameters.
+ */
+struct bnxt_ulp_device_params *
+bnxt_ulp_device_params_get(uint32_t dev_id)
+{
+	if (dev_id < BNXT_ULP_MAX_NUM_DEVICES)
+		return &ulp_device_params[dev_id];
+	return NULL;
+}
+
+/* Function to set the flow database to the ulp context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_flow_db_set(struct bnxt_ulp_context	*ulp_ctx,
+				struct bnxt_ulp_flow_db	*flow_db)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->flow_db = flow_db;
+	return 0;
+}
+
+/* Function to get the flow database from the ulp context. */
+struct bnxt_ulp_flow_db	*
+bnxt_ulp_cntxt_ptr2_flow_db_get(struct bnxt_ulp_context	*ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return NULL;
+	}
+
+	return ulp_ctx->cfg_data->flow_db;
+}
+
+/* Function to get the ulp context from eth device. */
+struct bnxt_ulp_context	*
+bnxt_ulp_eth_dev_ptr2_cntxt_get(struct rte_eth_dev	*dev)
+{
+	struct bnxt	*bp;
+
+	bp = (struct bnxt *)dev->data->dev_private;
+	if (!bp) {
+		BNXT_TF_DBG(ERR, "Bnxt private data is not initialized\n");
+		return NULL;
+	}
+	return &bp->ulp_ctx;
+}
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
new file mode 100644
index 0000000..d88225f
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_ULP_H_
+#define _BNXT_ULP_H_
+
+#include <inttypes.h>
+#include <stdbool.h>
+#include <sys/queue.h>
+
+#include "rte_ethdev.h"
+
+struct bnxt_ulp_data {
+	uint32_t			tbl_scope_id;
+	struct bnxt_ulp_mark_tbl	*mark_tbl;
+	uint32_t			dev_id; /* Hardware device id */
+	uint32_t			ref_cnt;
+	struct bnxt_ulp_flow_db		*flow_db;
+};
+
+struct bnxt_ulp_context {
+	struct bnxt_ulp_data	*cfg_data;
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	struct tf		*g_tfp;
+};
+
+struct bnxt_ulp_pci_info {
+	uint32_t	domain;
+	uint8_t		bus;
+};
+
+struct bnxt_ulp_session_state {
+	STAILQ_ENTRY(bnxt_ulp_session_state)	next;
+	bool					bnxt_ulp_init;
+	pthread_mutex_t				bnxt_ulp_mutex;
+	struct bnxt_ulp_pci_info		pci_info;
+	struct bnxt_ulp_data			*cfg_data;
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	struct tf				*g_tfp;
+	uint32_t				session_opened;
+};
+
+/* ULP flow id structure */
+struct rte_tf_flow {
+	uint32_t	flow_id;
+};
+
+/* Function to set the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx, uint32_t dev_id);
+
+/* Function to get the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_get(struct bnxt_ulp_context *ulp_ctx, uint32_t *dev_id);
+
+/* Function to set the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_set(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t tbl_scope_id);
+
+/* Function to get the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_get(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t *tbl_scope_id);
+
+/* Function to set the tfp session details in the ulp context. */
+int32_t
+bnxt_ulp_cntxt_tfp_set(struct bnxt_ulp_context *ulp, struct tf *tfp);
+
+/* Function to get the tfp session details from ulp context. */
+struct tf *
+bnxt_ulp_cntxt_tfp_get(struct bnxt_ulp_context *ulp);
+
+/* Get the device table entry based on the device id. */
+struct bnxt_ulp_device_params *
+bnxt_ulp_device_params_get(uint32_t dev_id);
+
+int32_t
+bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
+			       struct bnxt_ulp_mark_tbl *mark_tbl);
+
+struct bnxt_ulp_mark_tbl *
+bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
+
+/* Function to set the flow database to the ulp context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_flow_db_set(struct bnxt_ulp_context	*ulp_ctx,
+				struct bnxt_ulp_flow_db	*flow_db);
+
+/* Function to get the flow database from the ulp context. */
+struct bnxt_ulp_flow_db	*
+bnxt_ulp_cntxt_ptr2_flow_db_get(struct bnxt_ulp_context	*ulp_ctx);
+
+/* Function to get the ulp context from eth device. */
+struct bnxt_ulp_context	*
+bnxt_ulp_eth_dev_ptr2_cntxt_get(struct rte_eth_dev *dev);
+
+#endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
new file mode 100644
index 0000000..3dd39c1
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_malloc.h>
+#include "bnxt.h"
+#include "bnxt_tf_common.h"
+#include "ulp_flow_db.h"
+#include "ulp_template_struct.h"
+
+/*
+ * Helper function to allocate the flow table and initialize
+ * the stack for allocation operations.
+ *
+ * flow_db [in] Ptr to flow database structure
+ * tbl_idx [in] The index to table creation.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+static int32_t
+ulp_flow_db_alloc_resource(struct bnxt_ulp_flow_db *flow_db,
+			   enum bnxt_ulp_flow_db_tables tbl_idx)
+{
+	uint32_t			idx = 0;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+	uint32_t			size;
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	size = sizeof(struct ulp_fdb_resource_info) * flow_tbl->num_resources;
+	flow_tbl->flow_resources =
+			rte_zmalloc("ulp_fdb_resource_info", size, 0);
+
+	if (!flow_tbl->flow_resources) {
+		BNXT_TF_DBG(ERR, "Failed to alloc memory for flow table\n");
+		return -ENOMEM;
+	}
+	size = sizeof(uint32_t) * flow_tbl->num_resources;
+	flow_tbl->flow_tbl_stack = rte_zmalloc("flow_tbl_stack", size, 0);
+	if (!flow_tbl->flow_tbl_stack) {
+		BNXT_TF_DBG(ERR, "Failed to alloc memory for flow tbl stack\n");
+		return -ENOMEM;
+	}
+	size = (flow_tbl->num_flows / sizeof(uint64_t)) + 1;
+	flow_tbl->active_flow_tbl = rte_zmalloc("active flow tbl", size, 0);
+	if (!flow_tbl->active_flow_tbl) {
+		BNXT_TF_DBG(ERR, "Failed to alloc memory for active tbl\n");
+		return -ENOMEM;
+	}
+
+	/* Initialize the stack table. */
+	for (idx = 0; idx < flow_tbl->num_resources; idx++)
+		flow_tbl->flow_tbl_stack[idx] = idx;
+
+	/* Ignore the first element in the list. */
+	flow_tbl->head_index = 1;
+	/* Tail points to the last entry in the list. */
+	flow_tbl->tail_index = flow_tbl->num_resources - 1;
+	return 0;
+}
+
+/*
+ * Helper function to deallocate the flow table.
+ *
+ * flow_db [in] Ptr to flow database structure
+ * tbl_idx [in] The index to table creation.
+ *
+ * Returns none.
+ */
+static void
+ulp_flow_db_dealloc_resource(struct bnxt_ulp_flow_db *flow_db,
+			     enum bnxt_ulp_flow_db_tables tbl_idx)
+{
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* Free all the allocated tables in the flow table. */
+	if (flow_tbl->active_flow_tbl) {
+		rte_free(flow_tbl->active_flow_tbl);
+		flow_tbl->active_flow_tbl = NULL;
+	}
+
+	if (flow_tbl->flow_tbl_stack) {
+		rte_free(flow_tbl->flow_tbl_stack);
+		flow_tbl->flow_tbl_stack = NULL;
+	}
+
+	if (flow_tbl->flow_resources) {
+		rte_free(flow_tbl->flow_resources);
+		flow_tbl->flow_resources = NULL;
+	}
+}
+
+/*
+ * Initialize the flow database. Memory is allocated in this
+ * call and assigned to the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt)
+{
+	struct bnxt_ulp_device_params		*dparms;
+	struct bnxt_ulp_flow_tbl		*flow_tbl;
+	struct bnxt_ulp_flow_db			*flow_db;
+	uint32_t				dev_id;
+
+	/* Get the dev specific number of flows that need to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctxt, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	flow_db = rte_zmalloc("bnxt_ulp_flow_db",
+			      sizeof(struct bnxt_ulp_flow_db), 0);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR,
+			    "Failed to allocate memory for flow table ptr\n");
+		goto error_free;
+	}
+
+	/* Attach the flow database to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_flow_db_set(ulp_ctxt, flow_db);
+
+	/* Populate the regular flow table limits. */
+	flow_tbl = &flow_db->flow_tbl[BNXT_ULP_REGULAR_FLOW_TABLE];
+	flow_tbl->num_flows = dparms->num_flows + 1;
+	flow_tbl->num_resources = (flow_tbl->num_flows *
+				   dparms->num_resources_per_flow);
+
+	/* Populate the default flow table limits. */
+	flow_tbl = &flow_db->flow_tbl[BNXT_ULP_DEFAULT_FLOW_TABLE];
+	flow_tbl->num_flows = BNXT_FLOW_DB_DEFAULT_NUM_FLOWS + 1;
+	flow_tbl->num_resources = (flow_tbl->num_flows *
+				   BNXT_FLOW_DB_DEFAULT_NUM_RESOURCES);
+
+	/* Allocate the resource for the regular flow table. */
+	if (ulp_flow_db_alloc_resource(flow_db, BNXT_ULP_REGULAR_FLOW_TABLE))
+		goto error_free;
+	if (ulp_flow_db_alloc_resource(flow_db, BNXT_ULP_DEFAULT_FLOW_TABLE))
+		goto error_free;
+
+	/* All good so return. */
+	return 0;
+error_free:
+	ulp_flow_db_deinit(ulp_ctxt);
+	return -ENOMEM;
+}
+
+/*
+ * Deinitialize the flow database. Memory is deallocated in
+ * this call and all flows should have been purged before this
+ * call.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success.
+ */
+int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
+{
+	struct bnxt_ulp_flow_db			*flow_db;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Detach the flow database from the ulp context. */
+	bnxt_ulp_cntxt_ptr2_flow_db_set(ulp_ctxt, NULL);
+
+	/* Free up all the memory. */
+	ulp_flow_db_dealloc_resource(flow_db, BNXT_ULP_REGULAR_FLOW_TABLE);
+	ulp_flow_db_dealloc_resource(flow_db, BNXT_ULP_DEFAULT_FLOW_TABLE);
+	rte_free(flow_db);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
new file mode 100644
index 0000000..a2ee8fa
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_FLOW_DB_H_
+#define _ULP_FLOW_DB_H_
+
+#include "bnxt_ulp.h"
+#include "ulp_template_db.h"
+
+#define BNXT_FLOW_DB_DEFAULT_NUM_FLOWS		128
+#define BNXT_FLOW_DB_DEFAULT_NUM_RESOURCES	5
+
+/* Structure for the flow database resource information. */
+struct ulp_fdb_resource_info {
+	/* Points to next resource in the chained list. */
+	uint32_t	nxt_resource_idx;
+	union {
+		uint64_t	resource_em_handle;
+		struct {
+			uint32_t	resource_type;
+			uint32_t	resource_hndl;
+		};
+	};
+};
+
+/* Structure for the flow database resource information. */
+struct bnxt_ulp_flow_tbl {
+	/* Flow tbl is the resource object list for each flow id. */
+	struct ulp_fdb_resource_info	*flow_resources;
+
+	/* Flow table stack to track free list of resources. */
+	uint32_t	*flow_tbl_stack;
+	uint32_t	head_index;
+	uint32_t	tail_index;
+
+	/* Table to track the active flows. */
+	uint64_t	*active_flow_tbl;
+	uint32_t	num_flows;
+	uint32_t	num_resources;
+};
+
+/* Flow database supports two tables. */
+enum bnxt_ulp_flow_db_tables {
+	BNXT_ULP_REGULAR_FLOW_TABLE,
+	BNXT_ULP_DEFAULT_FLOW_TABLE,
+	BNXT_ULP_FLOW_TABLE_MAX
+};
+
+/* Structure for the flow database resource information. */
+struct bnxt_ulp_flow_db {
+	struct bnxt_ulp_flow_tbl	flow_tbl[BNXT_ULP_FLOW_TABLE_MAX];
+};
+
+/*
+ * Initialize the flow database. Memory is allocated in this
+ * call and assigned to the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt);
+
+/*
+ * Deinitialize the flow database. Memory is deallocated in
+ * this call and all flows should have been purged before this
+ * call.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success.
+ */
+int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
+
+#endif /* _ULP_FLOW_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
new file mode 100644
index 0000000..3f28a73
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include "bnxt_ulp.h"
+#include "tf_ext_flow_handle.h"
+#include "ulp_mark_mgr.h"
+#include "bnxt_tf_common.h"
+#include "../bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+
+/*
+ * Allocate and Initialize all Mark Manager resources for this ulp context.
+ *
+ * ctxt [in] The ulp context for the mark manager.
+ *
+ */
+int32_t
+ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_device_params *dparms;
+	struct bnxt_ulp_mark_tbl *mark_tbl = NULL;
+	uint32_t dev_id;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(DEBUG, "Invalid ULP CTXT\n");
+		return -EINVAL;
+	}
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
+		return -EINVAL;
+	}
+
+	mark_tbl = rte_zmalloc("ulp_rx_mark_tbl_ptr",
+			       sizeof(struct bnxt_ulp_mark_tbl), 0);
+	if (!mark_tbl)
+		goto mem_error;
+
+	/* Allocate the LFID table based on the number of LFID entries. */
+	mark_tbl->lfid_tbl = rte_zmalloc("ulp_rx_em_flow_mark_table",
+					 dparms->lfid_entries *
+					    sizeof(struct bnxt_lfid_mark_info),
+					 0);
+
+	if (!mark_tbl->lfid_tbl)
+		goto mem_error;
+
+	/* Need to allocate 2 * Num flows to account for hash type bit. */
+	mark_tbl->gfid_tbl = rte_zmalloc("ulp_rx_eem_flow_mark_table",
+					 2 * dparms->num_flows *
+					    sizeof(struct bnxt_gfid_mark_info),
+					 0);
+	if (!mark_tbl->gfid_tbl)
+		goto mem_error;
+
+	/*
+	 * TBD: This needs to be generalized for better mark handling
+	 * These values are used to compress the FID to the allowable index
+	 * space.  The FID from hw may be the full hash.
+	 */
+	mark_tbl->gfid_max	= dparms->gfid_entries - 1;
+	mark_tbl->gfid_mask	= (dparms->gfid_entries / 2) - 1;
+	mark_tbl->gfid_type_bit = (dparms->gfid_entries / 2);
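+	/*
+	 * For example, with the WH_PLUS defaults from ulp_template_db.c
+	 * (gfid_entries = 65536): gfid_max = 0xffff, gfid_mask = 0x7fff
+	 * and gfid_type_bit = 0x8000, i.e. the top bit of the compressed
+	 * index carries the hash type.
+	 */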
+
+	BNXT_TF_DBG(DEBUG, "GFID Max = 0x%08x\nGFID MASK = 0x%08x\n",
+		    mark_tbl->gfid_max,
+		    mark_tbl->gfid_mask);
+
+	/* Add the mark tbl to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
+
+	return 0;
+
+mem_error:
+	if (mark_tbl) {
+		rte_free(mark_tbl->gfid_tbl);
+		rte_free(mark_tbl->lfid_tbl);
+		rte_free(mark_tbl);
+	}
+	BNXT_TF_DBG(DEBUG,
+		    "Failed to allocate memory for mark mgr\n");
+
+	return -ENOMEM;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
new file mode 100644
index 0000000..b175abd
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_MARK_MGR_H_
+#define _ULP_MARK_MGR_H_
+
+#include "bnxt_ulp.h"
+
+#define ULP_MARK_INVALID (0)
+struct bnxt_lfid_mark_info {
+	uint16_t	mark_id;
+	bool		valid;
+};
+
+struct bnxt_gfid_mark_info {
+	uint32_t	mark_id;
+	bool		valid;
+};
+
+struct bnxt_ulp_mark_tbl {
+	struct bnxt_lfid_mark_info	*lfid_tbl;
+	struct bnxt_gfid_mark_info	*gfid_tbl;
+	uint32_t			gfid_mask;
+	uint32_t			gfid_type_bit;
+	uint32_t			gfid_max;
+};
+
+/*
+ * Allocate and Initialize all Mark Manager resources for this ulp context.
+ *
+ * Initialize MARK database for GFID & LFID tables
+ * GFID: Global flow id which is based on EEM hash id.
+ * LFID: Local flow id which is the CFA action pointer.
+ * GFID is used for EEM flows, LFID is used for EM flows.
+ *
+ * Flow mapper modules adds mark_id in the MARK database.
+ *
+ * BNXT PMD receive handler extracts the hardware flow id from the
+ * received completion record. Fetches mark_id from the MARK
+ * database using the flow id. Injects mark_id into the packet's mbuf.
+ *
+ * ctxt [in] The ulp context for the mark manager.
+ */
+int32_t
+ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
+
+#endif /* _ULP_MARK_MGR_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
new file mode 100644
index 0000000..9670635
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+/*
+ * date: Mon Mar  9 02:37:53 2020
+ * version: 0.0
+ */
+
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+
+struct bnxt_ulp_device_params ulp_device_params[] = {
+	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
+		.global_fid_enable       = BNXT_ULP_SYM_YES,
+		.byte_order              = (enum bnxt_ulp_byte_order)
+						BNXT_ULP_SYM_LITTLE_ENDIAN,
+		.encap_byte_swap         = 1,
+		.lfid_entries            = 16384,
+		.lfid_entry_size         = 4,
+		.gfid_entries            = 65536,
+		.gfid_entry_size         = 4,
+		.num_flows               = 32768,
+		.num_resources_per_flow  = 8
+	}
+};
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
new file mode 100644
index 0000000..ba2a101
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+/*
+ * date: Mon Mar  9 02:37:53 2020
+ * version: 0.0
+ */
+
+#ifndef _ULP_TEMPLATE_DB_H_
+#define _ULP_TEMPLATE_DB_H_
+
+#define BNXT_ULP_MAX_NUM_DEVICES 4
+
+enum bnxt_ulp_byte_order {
+	BNXT_ULP_BYTE_ORDER_BE,
+	BNXT_ULP_BYTE_ORDER_LE,
+	BNXT_ULP_BYTE_ORDER_LAST
+};
+
+enum bnxt_ulp_device_id {
+	BNXT_ULP_DEVICE_ID_WH_PLUS,
+	BNXT_ULP_DEVICE_ID_THOR,
+	BNXT_ULP_DEVICE_ID_STINGRAY,
+	BNXT_ULP_DEVICE_ID_STINGRAY2,
+	BNXT_ULP_DEVICE_ID_LAST
+};
+
+enum bnxt_ulp_sym {
+	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
+	BNXT_ULP_SYM_YES = 1
+};
+
+#endif /* _ULP_TEMPLATE_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
new file mode 100644
index 0000000..4b9d0b2
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_TEMPLATE_STRUCT_H_
+#define _ULP_TEMPLATE_STRUCT_H_
+
+#include <stdint.h>
+#include "rte_ether.h"
+#include "rte_icmp.h"
+#include "rte_ip.h"
+#include "rte_tcp.h"
+#include "rte_udp.h"
+#include "rte_esp.h"
+#include "rte_sctp.h"
+#include "rte_flow.h"
+#include "tf_core.h"
+
+/* Device specific parameters. */
+struct bnxt_ulp_device_params {
+	uint8_t				description[16];
+	uint32_t			global_fid_enable;
+	enum bnxt_ulp_byte_order	byte_order;
+	uint8_t				encap_byte_swap;
+	uint32_t			lfid_entries;
+	uint32_t			lfid_entry_size;
+	uint64_t			gfid_entries;
+	uint32_t			gfid_entry_size;
+	uint64_t			num_flows;
+	uint32_t			num_resources_per_flow;
+};
+
+/*
+ * The ulp_device_params is indexed by the dev_id.
+ * This table maintains the device specific parameters.
+ */
+extern struct bnxt_ulp_device_params ulp_device_params[];
+
+#endif /* _ULP_TEMPLATE_STRUCT_H_ */
-- 
2.7.4



* [dpdk-dev] [PATCH v3 17/34] net/bnxt: add support for ULP session manager cleanup
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (15 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 16/34] net/bnxt: add support for ULP session manager init Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 18/34] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
                       ` (17 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Mike Baucom

A ULP session will contain all the resources needed to support
rte flow offloads. A session is initialized as part of rte_eth_device
start. A DPDK application can have multiple interfaces which
means rte_eth_device start will be called for each of these devices.
ULP session manager will make sure that a single ULP session is only
initialized once. Apart from this, it also initializes MARK database,
EEM table & flow database. ULP session manager also manages a list of
all opened ULP sessions.

This patch adds support for cleaning up resources initialized for ULP
sessions.
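
For reference, the teardown implemented by the new bnxt_ulp_deinit() is
the init sequence in reverse. A minimal sketch of the ordering
(example_port_stop is a hypothetical wrapper name; error handling
elided; all functions shown are added by this series):

    static void example_port_stop(struct bnxt *bp,
                                  struct bnxt_ulp_session_state *session)
    {
            ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx); /* EEM host memory */
            ulp_flow_db_deinit(&bp->ulp_ctx);           /* flow database */
            ulp_mark_db_deinit(&bp->ulp_ctx);           /* mark database */
            ulp_ctx_detach(bp, session);  /* drop the shared context ref */
            ulp_session_deinit(session);  /* unlink session once unused */
    }

Only the port that opened the TF session (see ulp_ctx_deinit_allowed())
actually frees the shared state; other ports simply drop their reference.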

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c         |   3 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c     | 167 ++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h     |  10 ++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  25 +++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |   8 ++
 5 files changed, 212 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 1703ce3..2f08921 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -951,6 +951,9 @@ static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
+	if (bp->truflow)
+		bnxt_ulp_deinit(bp);
+
 	eth_dev->data->dev_started = 0;
 	/* Prevent crashes when queues are still in use */
 	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 7afc6bf..3795c6d 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -28,6 +28,27 @@ STAILQ_HEAD(, bnxt_ulp_session_state) bnxt_ulp_session_list =
 static pthread_mutex_t bnxt_ulp_global_mutex = PTHREAD_MUTEX_INITIALIZER;
 
 /*
+ * Allow the deletion of context only for the bnxt device that
+ * created the session
+ * TBD - The implementation of the function should change to
+ * using the reference count once tf_session_attach functionality
+ * is fixed.
+ */
+bool
+ulp_ctx_deinit_allowed(void *ptr)
+{
+	struct bnxt *bp = (struct bnxt *)ptr;
+
+	if (!bp)
+		return 0;
+
+	if (&bp->tfp == bp->ulp_ctx.g_tfp)
+		return 1;
+
+	return 0;
+}
+
+/*
  * Initialize an ULP session.
  * An ULP session will contain all the resources needed to support rte flow
  * offloads. A session is initialized as part of rte_eth_device start.
@@ -67,6 +88,22 @@ ulp_ctx_session_open(struct bnxt *bp,
 	return rc;
 }
 
+/*
+ * Close the ULP session.
+ * It takes the ulp context pointer.
+ */
+static void
+ulp_ctx_session_close(struct bnxt *bp,
+		      struct bnxt_ulp_session_state *session)
+{
+	/* close the session in the hardware */
+	if (session->session_opened)
+		tf_close_session(&bp->tfp);
+	session->session_opened = 0;
+	session->g_tfp = NULL;
+	bp->ulp_ctx.g_tfp = NULL;
+}
+
 static void
 bnxt_init_tbl_scope_parms(struct bnxt *bp,
 			  struct tf_alloc_tbl_scope_parms *params)
@@ -138,6 +175,41 @@ ulp_eem_tbl_scope_init(struct bnxt *bp)
 	return 0;
 }
 
+/* Free Extended Exact Match host memory */
+static int32_t
+ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
+{
+	struct tf_free_tbl_scope_parms	params = {0};
+	struct tf			*tfp;
+	int32_t				rc = 0;
+
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return -EINVAL;
+
+	/* Free the resources for the last device */
+	if (!ulp_ctx_deinit_allowed(bp))
+		return rc;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get the truflow pointer\n");
+		return -EINVAL;
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &params.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get the table scope id\n");
+		return -EINVAL;
+	}
+
+	rc = tf_free_tbl_scope(tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to free table scope\n");
+		return -EINVAL;
+	}
+	return rc;
+}
+
 /* The function to free and deinit the ulp context data. */
 static int32_t
 ulp_ctx_deinit(struct bnxt *bp,
@@ -148,6 +220,9 @@ ulp_ctx_deinit(struct bnxt *bp,
 		return -EINVAL;
 	}
 
+	/* close the tf session */
+	ulp_ctx_session_close(bp, session);
+
 	/* Free the contents */
 	if (session->cfg_data) {
 		rte_free(session->cfg_data);
@@ -211,6 +286,36 @@ ulp_ctx_attach(struct bnxt_ulp_context *ulp_ctx,
 	return 0;
 }
 
+static int32_t
+ulp_ctx_detach(struct bnxt *bp,
+	       struct bnxt_ulp_session_state *session)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+
+	if (!bp || !session) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	ulp_ctx = &bp->ulp_ctx;
+
+	if (!ulp_ctx->cfg_data)
+		return 0;
+
+	/* TBD call TF_session_detach */
+
+	/* Decrement the ulp context data reference count usage. */
+	if (ulp_ctx->cfg_data->ref_cnt >= 1) {
+		ulp_ctx->cfg_data->ref_cnt--;
+		if (ulp_ctx_deinit_allowed(bp))
+			ulp_ctx_deinit(bp, session);
+		ulp_ctx->cfg_data = NULL;
+		ulp_ctx->g_tfp = NULL;
+		return 0;
+	}
+	BNXT_TF_DBG(ERR, "context detach on invalid data\n");
+	return 0;
+}
+
 /*
  * Initialize the state of an ULP session.
  * If the state of an ULP session is not initialized, set it's state to
@@ -297,6 +402,26 @@ ulp_session_init(struct bnxt *bp,
 }
 
 /*
+ * When a device is closed, remove its associated session from the global
+ * session list.
+ */
+static void
+ulp_session_deinit(struct bnxt_ulp_session_state *session)
+{
+	if (!session)
+		return;
+
+	if (!session->cfg_data) {
+		pthread_mutex_lock(&bnxt_ulp_global_mutex);
+		STAILQ_REMOVE(&bnxt_ulp_session_list, session,
+			      bnxt_ulp_session_state, next);
+		pthread_mutex_destroy(&session->bnxt_ulp_mutex);
+		rte_free(session);
+		pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+	}
+}
+
+/*
  * When a port is initialized by DPDK, this function is called
  * to initialize the ULP context and the rest of the
  * infrastructure associated with it.
@@ -363,12 +488,52 @@ bnxt_ulp_init(struct bnxt *bp)
 	return rc;
 
 jump_to_error:
+	bnxt_ulp_deinit(bp);
 	return -ENOMEM;
 }
 
 /* Below are the access functions to access internal data of ulp context. */
 
-/* Function to set the Mark DB into the context. */
+/*
+ * When a port is de-initialized by DPDK, this function is called
+ * to clear the ULP context and the rest of the
+ * infrastructure associated with it.
+ */
+void
+bnxt_ulp_deinit(struct bnxt *bp)
+{
+	struct bnxt_ulp_session_state	*session;
+	struct rte_pci_device		*pci_dev;
+	struct rte_pci_addr		*pci_addr;
+
+	/* Get the session first */
+	pci_dev = RTE_DEV_TO_PCI(bp->eth_dev->device);
+	pci_addr = &pci_dev->addr;
+	pthread_mutex_lock(&bnxt_ulp_global_mutex);
+	session = ulp_get_session(pci_addr);
+	pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+
+	/* Session not found, just exit */
+	if (!session)
+		return;
+
+	/* cleanup the eem table scope */
+	ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx);
+
+	/* cleanup the flow database */
+	ulp_flow_db_deinit(&bp->ulp_ctx);
+
+	/* Delete the Mark database */
+	ulp_mark_db_deinit(&bp->ulp_ctx);
+
+	/* Delete the ulp context and tf session */
+	ulp_ctx_detach(bp, session);
+
+	/* Finally delete the bnxt session*/
+	ulp_session_deinit(session);
+}
+
+/* Function to set the Mark DB into the context */
 int32_t
 bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
 				struct bnxt_ulp_mark_tbl *mark_tbl)
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index d88225f..b3e9e96 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -47,6 +47,16 @@ struct rte_tf_flow {
 	uint32_t	flow_id;
 };
 
+/*
+ * Allow the deletion of context only for the bnxt device that
+ * created the session
+ * TBD - The implementation of the function should change to
+ * using the reference count once tf_session_attach functionality
+ * is fixed.
+ */
+bool
+ulp_ctx_deinit_allowed(void *bp);
+
 /* Function to set the device id of the hardware. */
 int32_t
 bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx, uint32_t dev_id);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 3f28a73..9e4307e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -92,3 +92,28 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 
 	return -ENOMEM;
 }
+
+/*
+ * Release all resources in the Mark Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ */
+int32_t
+ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_mark_tbl *mtbl;
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+
+	if (mtbl) {
+		rte_free(mtbl->gfid_tbl);
+		rte_free(mtbl->lfid_tbl);
+		rte_free(mtbl);
+
+		/* Safe to ignore on deinit */
+		(void)bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, NULL);
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index b175abd..5948683 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -46,4 +46,12 @@ struct bnxt_ulp_mark_tbl {
 int32_t
 ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Release all resources in the Mark Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the mark manager
+ */
+int32_t
+ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
+
 #endif /* _ULP_MARK_MGR_H_ */
-- 
2.7.4



* [dpdk-dev] [PATCH v3 18/34] net/bnxt: add helper functions for blob/regfile ops
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (16 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 17/34] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 19/34] net/bnxt: add support to process action tables Venkat Duvvuru
                       ` (16 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

1. blob routines for managing key/mask/result data
2. regfile routines for managing temporary data during flow
   construction
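
As an illustration, a key build packs match fields back to back into a
blob before the key is written to hardware. A minimal usage sketch of
the new helpers (the 48-bit MAC value is made up for illustration):

    struct ulp_blob key;
    uint8_t mac[6] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55};
    uint8_t *data;
    uint16_t bitlen;

    if (!ulp_blob_init(&key, BNXT_ULP_FLMP_BLOB_SIZE_IN_BITS,
                       BNXT_ULP_BYTE_ORDER_BE))
            return -EINVAL; /* handle failure */
    ulp_blob_push(&key, mac, 48); /* 48-bit field, packed MSB first */
    ulp_blob_pad_push(&key, 8);   /* pad to the next byte boundary */
    data = ulp_blob_data_get(&key, &bitlen); /* bitlen is now 56 */

The regfile serves the same flow-construction path for temporaries:
ulp_regfile_write() stores a 64-bit value under an enum index and
ulp_regfile_read() fetches it back while later fields are built.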

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                 |   2 +
 drivers/net/bnxt/tf_ulp/ulp_template_db.h |  12 +
 drivers/net/bnxt/tf_ulp/ulp_utils.c       | 521 ++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_utils.h       | 279 ++++++++++++++++
 4 files changed, 814 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index bb9b888..4e0dea1 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -61,6 +61,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
+
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index ba2a101..1eed828 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -27,6 +27,18 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST
 };
 
+enum bnxt_ulp_fmf_mask {
+	BNXT_ULP_FMF_MASK_IGNORE,
+	BNXT_ULP_FMF_MASK_ANY,
+	BNXT_ULP_FMF_MASK_EXACT,
+	BNXT_ULP_FMF_MASK_WILDCARD,
+	BNXT_ULP_FMF_MASK_LAST
+};
+
+enum bnxt_ulp_regfile_index {
+	BNXT_ULP_REGFILE_INDEX_LAST
+};
+
 enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_YES = 1
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.c b/drivers/net/bnxt/tf_ulp/ulp_utils.c
new file mode 100644
index 0000000..1d463cd
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.c
@@ -0,0 +1,521 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#include "ulp_utils.h"
+#include "bnxt_tf_common.h"
+
+/*
+ * Initialize the regfile structure for writing
+ *
+ * regfile [in] Ptr to a regfile instance
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_regfile_init(struct ulp_regfile *regfile)
+{
+	/* validate the arguments */
+	if (!regfile) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+	memset(regfile, 0, sizeof(struct ulp_regfile));
+	return 1; /* Success */
+}
+
+/*
+ * Read a value from the regfile
+ *
+ * regfile [in] The regfile instance. Must be initialized prior to being used
+ *
+ * field [in] The field to be read within the regfile.
+ *
+ * data [out] The location into which the field's value is copied
+ *
+ * returns size, zero on failure
+ */
+uint32_t
+ulp_regfile_read(struct ulp_regfile *regfile,
+		 enum bnxt_ulp_regfile_index field,
+		 uint64_t *data)
+{
+	/* validate the arguments */
+	if (!regfile || field >= BNXT_ULP_REGFILE_INDEX_LAST) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+
+	*data = regfile->entry[field].data;
+	return sizeof(*data);
+}
+
+/*
+ * Write a value to the regfile
+ *
+ * regfile [in] The regfile instance.  Must be initialized prior to being used
+ *
+ * field [in] The field to be written within the regfile.
+ *
+ * data [in] The value to be written into the regfile field.  It is
+ * stored in the same byte order as it was provided.
+ *
+ * returns the size of the data written on success, zero on failure
+ */
+uint32_t
+ulp_regfile_write(struct ulp_regfile *regfile,
+		  enum bnxt_ulp_regfile_index field,
+		  uint64_t data)
+{
+	/* validate the arguments */
+	if (!regfile || field >= BNXT_ULP_REGFILE_INDEX_LAST) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+
+	regfile->entry[field].data = data;
+	return sizeof(data); /* Success */
+}
+
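+/*
+ * Write "bitlen" bits of "val" into the byte stream "bs" starting at
+ * bit position "bitpos", packing from the most significant bit down.
+ * For example, writing the 3-bit value 0b101 at bitpos 6 lands 0b10
+ * in the low two bits of byte 0 and 0b1 in the top bit of byte 1.
+ */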
+static void
+ulp_bs_put_msb(uint8_t *bs, uint16_t bitpos, uint8_t bitlen, uint8_t val)
+{
+	uint8_t bitoffs = bitpos % 8;
+	uint16_t index  = bitpos / 8;
+	uint8_t mask;
+	uint8_t tmp;
+	int8_t shift;
+
+	tmp = bs[index];
+	mask = ((uint8_t)-1 >> (8 - bitlen));
+	shift = 8 - bitoffs - bitlen;
+	val &= mask;
+
+	if (shift >= 0) {
+		tmp &= ~(mask << shift);
+		tmp |= val << shift;
+		bs[index] = tmp;
+	} else {
+		tmp &= ~((uint8_t)-1 >> bitoffs);
+		tmp |= val >> -shift;
+		bs[index++] = tmp;
+
+		tmp = bs[index];
+		tmp &= ((uint8_t)-1 >> (bitlen - (8 - bitoffs)));
+		tmp |= val << (8 + shift);
+		bs[index] = tmp;
+	}
+}
+
+static void
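+/*
+ * Write "bitlen" bits of "val" into the byte stream "bs" starting at
+ * bit position "bitpos", packing from the least significant bit up.
+ */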
+ulp_bs_put_lsb(uint8_t *bs, uint16_t bitpos, uint8_t bitlen, uint8_t val)
+{
+	uint8_t bitoffs = bitpos % 8;
+	uint16_t index  = bitpos / 8;
+	uint8_t mask;
+	uint8_t tmp;
+	uint8_t shift;
+	uint8_t partial;
+
+	tmp = bs[index];
+	shift = bitoffs;
+
+	if (bitoffs + bitlen <= 8) {
+		mask = ((1 << bitlen) - 1) << shift;
+		tmp &= ~mask;
+		tmp |= ((val << shift) & mask);
+		bs[index] = tmp;
+	} else {
+		partial = 8 - bitoffs;
+		mask = ((1 << partial) - 1) << shift;
+		tmp &= ~mask;
+		tmp |= ((val << shift) & mask);
+		bs[index++] = tmp;
+
+		val >>= partial;
+		partial = bitlen - partial;
+		mask = ((1 << partial) - 1);
+		tmp = bs[index];
+		tmp &= ~mask;
+		tmp |= (val & mask);
+		bs[index] = tmp;
+	}
+}
+
+/* Assuming that val is in Big-Endian Format */
+static uint32_t
+ulp_bs_push_lsb(uint8_t *bs, uint16_t pos, uint8_t len, uint8_t *val)
+{
+	int i;
+	int cnt = (len) / 8;
+	int tlen = len;
+
+	if (cnt > 0 && !(len % 8))
+		cnt -= 1;
+
+	for (i = 0; i < cnt; i++) {
+		ulp_bs_put_lsb(bs, pos, 8, val[cnt - i]);
+		pos += 8;
+		tlen -= 8;
+	}
+
+	/* Handle the remainder bits */
+	if (tlen)
+		ulp_bs_put_lsb(bs, pos, tlen, val[0]);
+	return len;
+}
+
+/* Assuming that val is in Big-Endian Format */
+static uint32_t
+ulp_bs_push_msb(uint8_t *bs, uint16_t pos, uint8_t len, uint8_t *val)
+{
+	int i;
+	int cnt = (len + 7) / 8;
+	int tlen = len;
+
+	/* Handle any remainder bits */
+	int tmp = len % 8;
+
+	if (!tmp)
+		tmp = 8;
+
+	ulp_bs_put_msb(bs, pos, tmp, val[0]);
+
+	pos += tmp;
+	tlen -= tmp;
+
+	for (i = 1; i < cnt; i++) {
+		ulp_bs_put_msb(bs, pos, 8, val[i]);
+		pos += 8;
+		tlen -= 8;
+	}
+
+	return len;
+}
+
+/*
+ * Initializes the blob structure for creating binary blob
+ *
+ * blob [in] The blob to be initialized
+ *
+ * bitlen [in] The bit length of the blob
+ *
+ * order [in] The byte order for the blob.  All fields are packed with
+ * this order.
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_blob_init(struct ulp_blob *blob,
+	      uint16_t bitlen,
+	      enum bnxt_ulp_byte_order order)
+{
+	/* validate the arguments */
+	if (!blob || bitlen > (8 * sizeof(blob->data))) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+	blob->bitlen = bitlen;
+	blob->byte_order = order;
+	blob->write_idx = 0;
+	memset(blob->data, 0, sizeof(blob->data));
+	return 1; /* Success */
+}
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] A pointer to bytes to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * Zero is returned on error, the number of bits pushed otherwise.
+ */
+#define ULP_BLOB_BYTE		8
+#define ULP_BLOB_BYTE_HEX	0xFF
+#define BLOB_MASK_CAL(x)	((0xFF << (x)) & 0xFF)
+uint32_t
+ulp_blob_push(struct ulp_blob *blob,
+	      uint8_t *data,
+	      uint32_t datalen)
+{
+	uint32_t rc;
+
+	/* validate the arguments */
+	if (!blob || datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+
+	if (blob->byte_order == BNXT_ULP_BYTE_ORDER_BE)
+		rc = ulp_bs_push_msb(blob->data,
+				     blob->write_idx,
+				     datalen,
+				     data);
+	else
+		rc = ulp_bs_push_lsb(blob->data,
+				     blob->write_idx,
+				     datalen,
+				     data);
+	if (!rc) {
+		BNXT_TF_DBG(ERR, "Failed to write blob\n");
+		return 0;
+	}
+	blob->write_idx += datalen;
+	return datalen;
+}
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] 64-bit value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * NULL returned on error, pointer to the pushed value otherwise.
+ */
+uint8_t *
+ulp_blob_push_64(struct ulp_blob *blob,
+		 uint64_t *data,
+		 uint32_t datalen)
+{
+	uint8_t *val = (uint8_t *)data;
+	int rc;
+
+	int size = (datalen + 7) / 8;
+
+	if (!blob || !data ||
+	    datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0;
+	}
+
+	rc = ulp_blob_push(blob, &val[8 - size], datalen);
+	if (!rc)
+		return 0;
+
+	return &val[8 - size];
+}
+
+/*
+ * Add encap data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * Zero is returned on error, the number of bits pushed otherwise.
+ */
+uint32_t
+ulp_blob_push_encap(struct ulp_blob *blob,
+		    uint8_t *data,
+		    uint32_t datalen)
+{
+	uint8_t		*val = (uint8_t *)data;
+	uint32_t	initial_size, write_size = datalen;
+	uint32_t	size = 0;
+
+	if (!blob || !data ||
+	    datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0;
+	}
+
+	initial_size = ULP_BYTE_2_BITS(sizeof(uint64_t)) -
+	    (blob->write_idx % ULP_BYTE_2_BITS(sizeof(uint64_t)));
+	while (write_size > 0) {
+		if (initial_size && write_size > initial_size) {
+			size = initial_size;
+			initial_size = 0;
+		} else if (initial_size && write_size <= initial_size) {
+			size = write_size;
+			initial_size = 0;
+		} else if (write_size > ULP_BYTE_2_BITS(sizeof(uint64_t))) {
+			size = ULP_BYTE_2_BITS(sizeof(uint64_t));
+		} else {
+			size = write_size;
+		}
+		if (!ulp_blob_push(blob, val, size)) {
+			BNXT_TF_DBG(ERR, "push field failed\n");
+			return 0;
+		}
+		val += ULP_BITS_2_BYTE(size);
+		write_size -= size;
+	}
+	return datalen;
+}
+
+/*
+ * Adds pad to an initialized blob at the current offset
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * datalen [in] The number of bits of pad to add
+ *
+ * returns the number of pad bits added, zero on failure
+ */
+uint32_t
+ulp_blob_pad_push(struct ulp_blob *blob,
+		  uint32_t datalen)
+{
+	if (datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "Pad too large for blob\n");
+		return 0;
+	}
+
+	blob->write_idx += datalen;
+	return datalen;
+}
+
+/*
+ * Get the data portion of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * datalen [out] The number of bits that are filled.
+ *
+ * returns a byte array of the blob data.  Returns NULL on error.
+ */
+uint8_t *
+ulp_blob_data_get(struct ulp_blob *blob,
+		  uint16_t *datalen)
+{
+	/* validate the arguments */
+	if (!blob) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return NULL; /* failure */
+	}
+	*datalen = blob->write_idx;
+	return blob->data;
+}
+
+/*
+ * Set the encap swap start index of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * returns void.
+ */
+void
+ulp_blob_encap_swap_idx_set(struct ulp_blob *blob)
+{
+	/* validate the arguments */
+	if (!blob) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return; /* failure */
+	}
+	blob->encap_swap_idx = blob->write_idx;
+}
+
+/*
+ * Perform the encap buffer swap to 64 bit reversal.
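+ * Within each 8-byte group following the encap swap index, the four
+ * 16-bit words are reversed in order (bytes ABCDEFGH become GHEFCDAB).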
+ *
+ * blob [in] The blob's data to be used for swap.
+ *
+ * returns void.
+ */
+void
+ulp_blob_perform_encap_swap(struct ulp_blob *blob)
+{
+	uint32_t		i, idx = 0, end_idx = 0;
+	uint8_t		temp_val_1, temp_val_2;
+
+	/* validate the arguments */
+	if (!blob) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return; /* failure */
+	}
+	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx + 1);
+	end_idx = ULP_BITS_2_BYTE(blob->write_idx);
+
+	while (idx <= end_idx) {
+		for (i = 0; i < 4; i = i + 2) {
+			temp_val_1 = blob->data[idx + i];
+			temp_val_2 = blob->data[idx + i + 1];
+			blob->data[idx + i] = blob->data[idx + 6 - i];
+			blob->data[idx + i + 1] = blob->data[idx + 7 - i];
+			blob->data[idx + 7 - i] = temp_val_2;
+			blob->data[idx + 6 - i] = temp_val_1;
+		}
+		idx += 8;
+	}
+}
+
+/*
+ * Read data from the operand
+ *
+ * operand [in] A pointer to a 16 Byte operand
+ *
+ * val [in/out] The variable to copy the operand to
+ *
+ * bytes [in] The number of bytes to read into val
+ *
+ * returns number of bits read, zero on error
+ */
+uint16_t
+ulp_operand_read(uint8_t *operand,
+		 uint8_t *val,
+		 uint16_t bytes)
+{
+	/* validate the arguments */
+	if (!operand || !val) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+	memcpy(val, operand, bytes);
+	return bytes;
+}
+
+/*
+ * Copy the buffer in the encap format, 2 bytes at a time.
+ * The MSB of the src is placed at the LSB of dst.
+ *
+ * dst [out] The destination buffer
+ * src [in] The source buffer
+ * size [in] The size of the buffer.
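+ *
+ * e.g. a 4-byte src {0xAA, 0xBB, 0xCC, 0xDD} is written to dst as
+ * {0xCC, 0xDD, 0xAA, 0xBB}.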
+ */
+void
+ulp_encap_buffer_copy(uint8_t *dst,
+		      const uint8_t *src,
+		      uint16_t size)
+{
+	uint16_t	idx = 0;
+
+	/* copy 2 bytes at a time. Write MSB to LSB */
+	while ((idx + sizeof(uint16_t)) <= size) {
+		memcpy(&dst[idx], &src[size - idx - sizeof(uint16_t)],
+		       sizeof(uint16_t));
+		idx += sizeof(uint16_t);
+	}
+}
+
+/*
+ * Check whether the buffer is empty
+ *
+ * buf [in] The buffer
+ * size [in] The size of the buffer in bytes
+ *
+ * returns 1 if the buffer is empty, 0 otherwise
+ */
+int32_t ulp_buffer_is_empty(const uint8_t *buf, uint32_t size)
+{
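+	/* empty when the first byte is zero and every later byte equals it */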
+	return buf[0] == 0 && !memcmp(buf, buf + 1, size - 1);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.h b/drivers/net/bnxt/tf_ulp/ulp_utils.h
new file mode 100644
index 0000000..db88546
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_UTILS_H_
+#define _ULP_UTILS_H_
+
+#include "bnxt.h"
+#include "ulp_template_db.h"
+
+/*
+ * Macros for bitmap sets and gets
+ * These macros can be used when the values are powers of 2.
+ */
+#define ULP_BITMAP_SET(bitmap, val)	((bitmap) |= (val))
+#define ULP_BITMAP_RESET(bitmap, val)	((bitmap) &= ~(val))
+#define ULP_BITMAP_ISSET(bitmap, val)	((bitmap) & (val))
+#define ULP_BITSET_CMP(b1, b2)  memcmp(&(b1)->bits, \
+				&(b2)->bits, sizeof((b1)->bits))
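+
+/*
+ * Example (illustrative): for a flag defined as the power-of-2 value
+ * 0x4, ULP_BITMAP_SET(bitmap, 0x4) sets it and
+ * ULP_BITMAP_ISSET(bitmap, 0x4) then evaluates non-zero.
+ */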
+/*
+ * Macros for bitmap sets and gets
+ * These macros can be used when the values are not powers of 2 and
+ * are simple index values.
+ */
+#define ULP_INDEX_BITMAP_SIZE	(sizeof(uint64_t) * 8)
+#define ULP_INDEX_BITMAP_CSET(i)	(1UL << \
+			((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE)))
+
+#define ULP_INDEX_BITMAP_SET(b, i)	((b) |= \
+			(1UL << ((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE))))
+
+#define ULP_INDEX_BITMAP_RESET(b, i)	((b) &= \
+			(~(1UL << ((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE)))))
+
+#define ULP_INDEX_BITMAP_GET(b, i)		(((b) >> \
+			((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE))) & 1)
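+
+/*
+ * Note: index 0 maps to the most significant bit of the word, i.e.
+ * for a uint64_t bitmap b, ULP_INDEX_BITMAP_SET(b, 0) sets bit 63
+ * and ULP_INDEX_BITMAP_SET(b, 63) sets bit 0.
+ */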
+
+#define ULP_DEVICE_PARAMS_INDEX(tid, dev_id)	\
+	(((tid) << BNXT_ULP_LOG2_MAX_NUM_DEV) | (dev_id))
+
+/* Macro to convert bytes to bits */
+#define ULP_BYTE_2_BITS(byte_x)		((byte_x) * 8)
+/* Macro to convert bits to bytes */
+#define ULP_BITS_2_BYTE(bits_x)		(((bits_x) + 7) / 8)
+/* Macro to convert bits to bytes with no round off*/
+#define ULP_BITS_2_BYTE_NR(bits_x)	((bits_x) / 8)
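+/* e.g. ULP_BITS_2_BYTE(12) = 2 (rounds up), ULP_BITS_2_BYTE_NR(12) = 1 */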
+
+/*
+ * Making the blob statically sized to 128 bytes for now.
+ * The blob must be initialized with ulp_blob_init prior to using.
+ */
+#define BNXT_ULP_FLMP_BLOB_SIZE	(128)
+#define BNXT_ULP_FLMP_BLOB_SIZE_IN_BITS	ULP_BYTE_2_BITS(BNXT_ULP_FLMP_BLOB_SIZE)
+struct ulp_blob {
+	enum bnxt_ulp_byte_order	byte_order;
+	uint16_t			write_idx;
+	uint16_t			bitlen;
+	uint8_t				data[BNXT_ULP_FLMP_BLOB_SIZE];
+	uint16_t			encap_swap_idx;
+};
+
+/*
+ * The data can likely be only 32 bits for now.  Just size check
+ * the data when being written.
+ */
+#define ULP_REGFILE_ENTRY_SIZE	(sizeof(uint32_t))
+struct ulp_regfile_entry {
+	uint64_t	data;
+	uint32_t	size;
+};
+
+struct ulp_regfile {
+	struct ulp_regfile_entry entry[BNXT_ULP_REGFILE_INDEX_LAST];
+};
+
+/*
+ * Initialize the regfile structure for writing
+ *
+ * regfile [in] Ptr to a regfile instance
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_regfile_init(struct ulp_regfile *regfile);
+
+/*
+ * Read a value from the regfile
+ *
+ * regfile [in] The regfile instance.  Must be initialized prior to being used
+ *
+ * field [in] The field to be read within the regfile.
+ *
+ * data [out] The value read from the regfile.
+ *
+ * returns zero on error, non-zero on success
+ */
+uint32_t
+ulp_regfile_read(struct ulp_regfile *regfile,
+		 enum bnxt_ulp_regfile_index field,
+		 uint64_t *data);
+
+/*
+ * Write a value to the regfile
+ *
+ * regfile [in] The regfile instance.  Must be initialized prior to being used
+ *
+ * field [in] The field to be written within the regfile.
+ *
+ * data [in] The value to be written into the regfile.  It is stored in
+ * the same byte order in which it is provided.
+ *
+ * returns zero on error, non-zero on success
+ */
+uint32_t
+ulp_regfile_write(struct ulp_regfile *regfile,
+		  enum bnxt_ulp_regfile_index field,
+		  uint64_t data);
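+
+/*
+ * Illustrative usage (mirrors the mapper code): allocated ids are
+ * saved in the regfile and recalled later by index:
+ *	ulp_regfile_write(parms->regfile, tbl->regfile_wr_idx, id);
+ *	...
+ *	if (!ulp_regfile_read(parms->regfile, idx, &regval))
+ *		return -EINVAL;
+ */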
+
+/*
+ * Initializes the blob structure for creating binary blob
+ *
+ * blob [in] The blob to be initialized
+ *
+ * bitlen [in] The bit length of the blob
+ *
+ * order [in] The byte order for the blob.  Currently only supporting
+ * big endian.  All fields are packed with this order.
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_blob_init(struct ulp_blob *blob,
+	      uint16_t bitlen,
+	      enum bnxt_ulp_byte_order order);
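+
+/*
+ * Typical usage (illustrative sketch, mirroring the mapper code):
+ *	struct ulp_blob blob;
+ *	uint16_t len;
+ *	if (!ulp_blob_init(&blob, tbl->result_bit_size, parms->order))
+ *		return -EINVAL;
+ *	ulp_blob_push(&blob, val, fld->field_bit_size);
+ *	data = ulp_blob_data_get(&blob, &len);
+ */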
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] A pointer to bytes to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * Returns zero on error, non-zero on success.
+ */
+uint32_t
+ulp_blob_push(struct ulp_blob *blob,
+	      uint8_t *data,
+	      uint32_t datalen);
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] 64-bit value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * NULL returned on error, ptr to pushed data otherwise
+ */
+uint8_t *
+ulp_blob_push_64(struct ulp_blob *blob,
+		 uint64_t *data,
+		 uint32_t datalen);
+
+/*
+ * Add encap data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * Returns zero on error, non-zero on success.
+ */
+uint32_t
+ulp_blob_push_encap(struct ulp_blob *blob,
+		    uint8_t *data,
+		    uint32_t datalen);
+
+/*
+ * Get the data portion of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * datalen [out] The number of bits that are filled.
+ *
+ * returns a byte array of the blob data.  Returns NULL on error.
+ */
+uint8_t *
+ulp_blob_data_get(struct ulp_blob *blob,
+		  uint16_t *datalen);
+
+/*
+ * Adds pad to an initialized blob at the current offset
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * datalen [in] The number of bits of pad to add
+ *
+ * returns the number of pad bits added, zero on failure
+ */
+uint32_t
+ulp_blob_pad_push(struct ulp_blob *blob,
+		  uint32_t datalen);
+
+/*
+ * Set the 64 bit swap start index of the binary blob.
+ *
+ * blob [in] The blob whose swap start index is set. The blob must be
+ * initialized prior to pushing data.
+ *
+ * returns void.
+ */
+void
+ulp_blob_encap_swap_idx_set(struct ulp_blob *blob);
+
+/*
+ * Perform the encap buffer swap to 64 bit reversal.
+ *
+ * blob [in] The blob's data to be used for swap.
+ *
+ * returns void.
+ */
+void
+ulp_blob_perform_encap_swap(struct ulp_blob *blob);
+
+/*
+ * Read data from the operand
+ *
+ * operand [in] A pointer to a 16 Byte operand
+ *
+ * val [in/out] The variable to copy the operand to
+ *
+ * bytes [in] The number of bytes to read into val
+ *
+ * returns the number of bytes read, zero on error
+ */
+uint16_t
+ulp_operand_read(uint8_t *operand,
+		 uint8_t *val,
+		 uint16_t bytes);
+
+/*
+ * Copy the buffer into the encap format, 2 bytes at a time: the
+ * 2-byte words of src are written to dst in reverse order, so the
+ * last word of src becomes the first word of dst.
+ *
+ * dst [out] The destination buffer
+ * src [in] The source buffer
+ * size [in] The size of the buffer in bytes
+ */
+void
+ulp_encap_buffer_copy(uint8_t *dst,
+		      const uint8_t *src,
+		      uint16_t size);
+
+/*
+ * Check whether the buffer is empty
+ *
+ * buf [in] The buffer
+ * size [in] The size of the buffer in bytes
+ *
+ * returns 1 if the buffer is empty, 0 otherwise
+ */
+int32_t ulp_buffer_is_empty(const uint8_t *buf, uint32_t size);
+
+#endif /* _ULP_UTILS_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 19/34] net/bnxt: add support to process action tables
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (17 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 18/34] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 20/34] net/bnxt: add support to process key tables Venkat Duvvuru
                       ` (15 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch processes the action template: it iterates through the
list of action info templates and processes each one.
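
At a high level the mapper simply walks the per-template array of
action table infos and processes each entry, building a result blob
and programming it. A simplified sketch of the control flow added
here (error handling trimmed):

    for (i = 0; i < parms->num_atbls; i++) {
        rc = ulp_mapper_action_info_process(parms, &parms->atbls[i]);
        if (rc)
            return rc;
    }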

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   1 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 136 ++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |  25 ++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 364 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |  39 +++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 245 +++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     | 104 ++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  48 +++-
 8 files changed, 959 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 4e0dea1..f464d9e 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -62,6 +62,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mapper.c
 
 #
 # Export include files
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 3dd39c1..6e73f25 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -7,7 +7,68 @@
 #include "bnxt.h"
 #include "bnxt_tf_common.h"
 #include "ulp_flow_db.h"
+#include "ulp_utils.h"
 #include "ulp_template_struct.h"
+#include "ulp_mapper.h"
+
+#define ULP_FLOW_DB_RES_DIR_BIT		31
+#define ULP_FLOW_DB_RES_DIR_MASK	0x80000000
+#define ULP_FLOW_DB_RES_FUNC_BITS	28
+#define ULP_FLOW_DB_RES_FUNC_MASK	0x70000000
+#define ULP_FLOW_DB_RES_NXT_MASK	0x0FFFFFFF
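+/*
+ * nxt_resource_idx layout (32 bits): bit 31 holds the direction,
+ * bits 30:28 the resource function and bits 27:0 the index of the
+ * next resource in the chain.
+ */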
+
+/* Macro to copy the nxt_resource_idx */
+#define ULP_FLOW_DB_RES_NXT_SET(dst, src)	{(dst) |= ((src) &\
+					 ULP_FLOW_DB_RES_NXT_MASK); }
+#define ULP_FLOW_DB_RES_NXT_RESET(dst)	((dst) &= ~(ULP_FLOW_DB_RES_NXT_MASK))
+
+/*
+ * Helper function to check whether the active flow bit for the given
+ * flow index is set. No validation is done in this function.
+ *
+ * flow_tbl [in] Ptr to flow table
+ * idx [in] The index of the bit to be checked.
+ *
+ * returns 1 on set or 0 if not set.
+ */
+static int32_t
+ulp_flow_db_active_flow_is_set(struct bnxt_ulp_flow_tbl	*flow_tbl,
+			       uint32_t			idx)
+{
+	uint32_t		active_index;
+
+	active_index = idx / ULP_INDEX_BITMAP_SIZE;
+	return ULP_INDEX_BITMAP_GET(flow_tbl->active_flow_tbl[active_index],
+				    idx);
+}
+
+/*
+ * Helper function to copy the resource params to resource info
+ * No validation is done in this function.
+ *
+ * resource_info [out] Ptr to resource information
+ * params [in] The input params from the caller
+ * returns none
+ */
+static void
+ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info   *resource_info,
+			       struct ulp_flow_db_res_params  *params)
+{
+	resource_info->nxt_resource_idx |= ((params->direction <<
+				      ULP_FLOW_DB_RES_DIR_BIT) &
+				     ULP_FLOW_DB_RES_DIR_MASK);
+	resource_info->nxt_resource_idx |= ((params->resource_func <<
+					     ULP_FLOW_DB_RES_FUNC_BITS) &
+					    ULP_FLOW_DB_RES_FUNC_MASK);
+
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+		resource_info->resource_hndl = (uint32_t)params->resource_hndl;
+		resource_info->resource_type = params->resource_type;
+
+	} else {
+		resource_info->resource_em_handle = params->resource_hndl;
+	}
+}
 
 /*
  * Helper function to allocate the flow table and initialize
@@ -185,3 +246,78 @@ int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 
 	return 0;
 }
+
+/*
+ * Allocate a resource entry in the flow database and link it to the flow.
+ * The params->critical_resource has to be set to 0 to allocate a new resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in] The contents to be copied into resource
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+	struct ulp_fdb_resource_info	*resource, *fid_resource;
+	uint32_t			idx;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (tbl_idx >= BNXT_ULP_FLOW_TABLE_MAX) {
+		BNXT_TF_DBG(ERR, "Invalid table index\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* check for max flows */
+	if (fid >= flow_tbl->num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return -EINVAL;
+	}
+
+	/* check if the flow is active or not */
+	if (!ulp_flow_db_active_flow_is_set(flow_tbl, fid)) {
+		BNXT_TF_DBG(ERR, "flow does not exist\n");
+		return -EINVAL;
+	}
+
+	/* check for max resource */
+	if ((flow_tbl->num_flows + 1) >= flow_tbl->tail_index) {
+		BNXT_TF_DBG(ERR, "Flow db has reached max resources\n");
+		return -ENOMEM;
+	}
+	fid_resource = &flow_tbl->flow_resources[fid];
+
+	if (!params->critical_resource) {
+		/* Not the critical_resource so allocate a resource */
+		idx = flow_tbl->flow_tbl_stack[flow_tbl->tail_index];
+		resource = &flow_tbl->flow_resources[idx];
+		flow_tbl->tail_index--;
+
+		/* Update the chain list of resource*/
+		ULP_FLOW_DB_RES_NXT_SET(resource->nxt_resource_idx,
+					fid_resource->nxt_resource_idx);
+		/* update the contents */
+		ulp_flow_db_res_params_to_info(resource, params);
+		ULP_FLOW_DB_RES_NXT_RESET(fid_resource->nxt_resource_idx);
+		ULP_FLOW_DB_RES_NXT_SET(fid_resource->nxt_resource_idx,
+					idx);
+	} else {
+		/* critical resource. Just update the fid resource */
+		ulp_flow_db_res_params_to_info(fid_resource, params);
+	}
+
+	/* all good, return success */
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index a2ee8fa..f6055a5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -53,6 +53,15 @@ struct bnxt_ulp_flow_db {
 	struct bnxt_ulp_flow_tbl	flow_tbl[BNXT_ULP_FLOW_TABLE_MAX];
 };
 
+/* flow db resource params to add resources */
+struct ulp_flow_db_res_params {
+	enum tf_dir			direction;
+	enum bnxt_ulp_resource_func	resource_func;
+	uint64_t			resource_hndl;
+	uint32_t			resource_type;
+	uint32_t			critical_resource;
+};
+
 /*
  * Initialize the flow database. Memory is allocated in this
  * call and assigned to the flow database.
@@ -74,4 +83,20 @@ int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt);
  */
 int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
 
+/*
+ * Allocate a resource entry in the flow database and link it to the flow.
+ * The params->critical_resource has to be set to 0 to allocate a new resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in] The contents to be copied into resource
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params);
+
 #endif /* _ULP_FLOW_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
new file mode 100644
index 0000000..9cfc382
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -0,0 +1,364 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include "bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_tf_common.h"
+#include "ulp_utils.h"
+#include "bnxt_ulp.h"
+#include "tfp.h"
+#include "tf_ext_flow_handle.h"
+#include "ulp_mark_mgr.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+
+int32_t
+ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms);
+
+/*
+ * Get the size of the action property for a given index.
+ *
+ * idx [in] The index for the action property
+ *
+ * returns the size of the action property.
+ */
+static uint32_t
+ulp_mapper_act_prop_size_get(uint32_t idx)
+{
+	if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST)
+		return 0;
+	return ulp_act_prop_map_table[idx];
+}
+
+/*
+ * Get the list of result fields that implement the flow action
+ *
+ * tbl [in] A single table instance to get the result fields from
+ *
+ * num_rslt_flds [out] The number of result fields in the returned array
+ *
+ * num_encap_flds [out] The number of encap fields in the returned array
+ *
+ * returns array of data fields, or NULL on error
+ */
+static struct bnxt_ulp_mapper_result_field_info *
+ulp_mapper_act_result_fields_get(struct bnxt_ulp_mapper_act_tbl_info *tbl,
+				 uint32_t *num_rslt_flds,
+				 uint32_t *num_encap_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_rslt_flds || !num_encap_flds)
+		return NULL;
+
+	idx		= tbl->result_start_idx;
+	*num_rslt_flds	= tbl->result_num_fields;
+	*num_encap_flds = tbl->encap_num_fields;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_act_result_field_list[idx];
+}
+
+static int32_t
+ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
+				struct bnxt_ulp_mapper_result_field_info *fld,
+				struct ulp_blob *blob)
+{
+	uint16_t idx, size_idx;
+	uint8_t	 *val = NULL;
+	uint64_t regval;
+	uint32_t val_size = 0, field_size = 0;
+
+	switch (fld->result_opcode) {
+	case BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT:
+		val = fld->result_operand;
+		if (!ulp_blob_push(blob, val, fld->field_bit_size)) {
+			BNXT_TF_DBG(ERR, "Failed to add field\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+
+		if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+			BNXT_TF_DBG(ERR, "act_prop[%d] oob\n", idx);
+			return -EINVAL;
+		}
+		val = &parms->act_prop->act_details[idx];
+		field_size = ulp_mapper_act_prop_size_get(idx);
+		if (fld->field_bit_size < ULP_BYTE_2_BITS(field_size)) {
+			field_size  = field_size -
+			    ((fld->field_bit_size + 7) / 8);
+			val += field_size;
+		}
+		if (!ulp_blob_push(blob, val, fld->field_bit_size)) {
+			BNXT_TF_DBG(ERR, "push field failed\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP_SZ:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+
+		if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+			BNXT_TF_DBG(ERR, "act_prop[%d] oob\n", idx);
+			return -EINVAL;
+		}
+		val = &parms->act_prop->act_details[idx];
+
+		/* get the size index next */
+		if (!ulp_operand_read(&fld->result_operand[sizeof(uint16_t)],
+				      (uint8_t *)&size_idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+		size_idx = tfp_be_to_cpu_16(size_idx);
+
+		if (size_idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+			BNXT_TF_DBG(ERR, "act_prop[%d] oob\n", size_idx);
+			return -EINVAL;
+		}
+		memcpy(&val_size, &parms->act_prop->act_details[size_idx],
+		       sizeof(uint32_t));
+		val_size = tfp_be_to_cpu_32(val_size);
+		val_size = ULP_BYTE_2_BITS(val_size);
+		ulp_blob_push_encap(blob, val, val_size);
+		break;
+	case BNXT_ULP_RESULT_OPC_SET_TO_REGFILE:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+
+		idx = tfp_be_to_cpu_16(idx);
+		/* Uninitialized regfile entries return 0 */
+		if (!ulp_regfile_read(parms->regfile, idx, &regval)) {
+			BNXT_TF_DBG(ERR, "regfile[%d] read oob\n", idx);
+			return -EINVAL;
+		}
+
+		val = ulp_blob_push_64(blob, &regval, fld->field_bit_size);
+		if (!val) {
+			BNXT_TF_DBG(ERR, "push field failed\n");
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/* Function to alloc action record and set the table. */
+static int32_t
+ulp_mapper_action_alloc_and_set(struct bnxt_ulp_mapper_parms *parms,
+				struct ulp_blob *blob)
+{
+	struct ulp_flow_db_res_params		fid_parms;
+	struct tf_alloc_tbl_entry_parms		alloc_parms = { 0 };
+	struct tf_free_tbl_entry_parms		free_parms = { 0 };
+	struct bnxt_ulp_mapper_act_tbl_info	*atbls = parms->atbls;
+	int32_t					rc = 0;
+	int32_t trc;
+	uint64_t				idx;
+
+	/* Set the allocation parameters for the table*/
+	alloc_parms.dir = atbls->direction;
+	alloc_parms.type = atbls->table_type;
+	alloc_parms.search_enable = atbls->srch_b4_alloc;
+	alloc_parms.result = ulp_blob_data_get(blob,
+					       &alloc_parms.result_sz_in_bytes);
+	if (!alloc_parms.result) {
+		BNXT_TF_DBG(ERR, "blob is not populated\n");
+		return -EINVAL;
+	}
+
+	rc = tf_alloc_tbl_entry(parms->tfp, &alloc_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "table type= [%d] dir = [%s] alloc failed\n",
+			    alloc_parms.type,
+			    (alloc_parms.dir == TF_DIR_RX) ? "RX" : "TX");
+		return rc;
+	}
+
+	/* Need to calculate the idx for the result record */
+	/*
+	 * TBD: Need to get the stride from tflib instead of having to
+	 * understand the construction of the pointer
+	 */
+	uint64_t tmpidx = alloc_parms.idx;
+
+	if (atbls->table_type == TF_TBL_TYPE_EXT)
+		tmpidx = (alloc_parms.idx * TF_ACTION_RECORD_SZ) >> 4;
+	else
+		tmpidx = alloc_parms.idx;
+
+	idx = tfp_cpu_to_be_64(tmpidx);
+
+	/* Store the allocated index for future use in the regfile */
+	rc = ulp_regfile_write(parms->regfile, atbls->regfile_wr_idx, idx);
+	if (!rc) {
+		BNXT_TF_DBG(ERR, "regfile[%d] write failed\n",
+			    atbls->regfile_wr_idx);
+		rc = -EINVAL;
+		goto error;
+	}
+
+	/*
+	 * Call the set_tbl_entry API if search is not enabled or the
+	 * searched entry is not found.
+	 */
+	if (!atbls->srch_b4_alloc || !alloc_parms.hit) {
+		struct tf_set_tbl_entry_parms set_parm = { 0 };
+		uint16_t	length;
+
+		set_parm.dir	= atbls->direction;
+		set_parm.type	= atbls->table_type;
+		set_parm.idx	= alloc_parms.idx;
+		set_parm.data	= ulp_blob_data_get(blob, &length);
+		set_parm.data_sz_in_bytes = length / 8;
+
+		if (set_parm.type == TF_TBL_TYPE_EXT)
+			bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
+							&set_parm.tbl_scope_id);
+		else
+			set_parm.tbl_scope_id = 0;
+
+		/* set the table entry */
+		rc = tf_set_tbl_entry(parms->tfp, &set_parm);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "table[%d][%s][%d] set failed\n",
+				    set_parm.type,
+				    (set_parm.dir == TF_DIR_RX) ? "RX" : "TX",
+				    set_parm.idx);
+			goto error;
+		}
+	}
+
+	/* Link the resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction		= atbls->direction;
+	fid_parms.resource_func		= atbls->resource_func;
+	fid_parms.resource_type		= atbls->table_type;
+	fid_parms.resource_hndl		= alloc_parms.idx;
+	fid_parms.critical_resource	= 0;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		rc = -EINVAL;
+		goto error;
+	}
+
+	return 0;
+error:
+
+	free_parms.dir	= alloc_parms.dir;
+	free_parms.type	= alloc_parms.type;
+	free_parms.idx	= alloc_parms.idx;
+
+	trc = tf_free_tbl_entry(parms->tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free table entry on failure\n");
+
+	return rc;
+}
+
+/*
+ * Function to process the action info. Iterate through the list of
+ * action info templates and process each one.
+ */
+static int32_t
+ulp_mapper_action_info_process(struct bnxt_ulp_mapper_parms *parms,
+			       struct bnxt_ulp_mapper_act_tbl_info *tbl)
+{
+	struct ulp_blob					blob;
+	struct bnxt_ulp_mapper_result_field_info	*flds, *fld;
+	uint32_t					num_flds = 0;
+	uint32_t					encap_flds = 0;
+	uint32_t					i;
+	int32_t						rc;
+	uint16_t					bit_size;
+
+	if (!tbl || !parms->act_prop || !parms->act_bitmap || !parms->regfile)
+		return -EINVAL;
+
+	/* use the max size if encap is enabled */
+	if (tbl->encap_num_fields)
+		bit_size = BNXT_ULP_FLMP_BLOB_SIZE_IN_BITS;
+	else
+		bit_size = tbl->result_bit_size;
+	if (!ulp_blob_init(&blob, bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "action blob init failed\n");
+		return -EINVAL;
+	}
+
+	flds = ulp_mapper_act_result_fields_get(tbl, &num_flds, &encap_flds);
+	if (!flds || !num_flds) {
+		BNXT_TF_DBG(ERR, "Template undefined for action\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < (num_flds + encap_flds); i++) {
+		fld = &flds[i];
+		rc = ulp_mapper_result_field_process(parms,
+						     fld,
+						     &blob);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Action field failed\n");
+			return rc;
+		}
+		/* set the swap index if 64 bit swap is enabled */
+		if (parms->encap_byte_swap && encap_flds) {
+			if ((i + 1) == num_flds)
+				ulp_blob_encap_swap_idx_set(&blob);
+			/* if 64 bit swap is enabled perform the 64bit swap */
+			if ((i + 1) == (num_flds + encap_flds))
+				ulp_blob_perform_encap_swap(&blob);
+		}
+	}
+
+	rc = ulp_mapper_action_alloc_and_set(parms, &blob);
+	return rc;
+}
+
+/*
+ * Function to process the action template. Iterate through the list
+ * of action info templates and process each one.
+ */
+int32_t
+ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
+{
+	uint32_t	i;
+	int32_t		rc = 0;
+
+	if (!parms->atbls || !parms->num_atbls) {
+		BNXT_TF_DBG(ERR, "No action tables for template[%d][%d].\n",
+			    parms->dev_id, parms->act_tid);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < parms->num_atbls; i++) {
+		rc = ulp_mapper_action_info_process(parms, &parms->atbls[i]);
+		if (rc)
+			return rc;
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
new file mode 100644
index 0000000..adbcec2
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_MAPPER_H_
+#define _ULP_MAPPER_H_
+
+#include <tf_core.h>
+#include <rte_log.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_ulp.h"
+#include "ulp_utils.h"
+
+/* Internal Structure for passing the arguments around */
+struct bnxt_ulp_mapper_parms {
+	uint32_t				dev_id;
+	enum bnxt_ulp_byte_order		order;
+	uint32_t				act_tid;
+	struct bnxt_ulp_mapper_act_tbl_info	*atbls;
+	uint32_t				num_atbls;
+	uint32_t				class_tid;
+	struct bnxt_ulp_mapper_class_tbl_info	*ctbls;
+	uint32_t				num_ctbls;
+	struct ulp_rte_act_prop			*act_prop;
+	struct ulp_rte_act_bitmap		*act_bitmap;
+	struct ulp_rte_hdr_field		*hdr_field;
+	struct ulp_regfile			*regfile;
+	struct tf				*tfp;
+	struct bnxt_ulp_context			*ulp_ctx;
+	uint8_t					encap_byte_swap;
+	uint32_t				fid;
+	enum bnxt_ulp_flow_db_tables		tbl_idx;
+};
+
+#endif /* _ULP_MAPPER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 9670635..fc77800 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -11,6 +11,89 @@
 #include "ulp_template_db.h"
 #include "ulp_template_struct.h"
 
+uint32_t ulp_act_prop_map_table[] = {
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_SZ] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_SZ,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_TYPE] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_TYPE,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_NUM] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_NUM,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_L3_TYPE,
+	[BNXT_ULP_ACT_PROP_IDX_MPLS_POP_NUM] =
+		BNXT_ULP_ACT_PROP_SZ_MPLS_POP_NUM,
+	[BNXT_ULP_ACT_PROP_IDX_MPLS_PUSH_NUM] =
+		BNXT_ULP_ACT_PROP_SZ_MPLS_PUSH_NUM,
+	[BNXT_ULP_ACT_PROP_IDX_VNIC] =
+		BNXT_ULP_ACT_PROP_SZ_VNIC,
+	[BNXT_ULP_ACT_PROP_IDX_VPORT] =
+		BNXT_ULP_ACT_PROP_SZ_VPORT,
+	[BNXT_ULP_ACT_PROP_IDX_MARK] =
+		BNXT_ULP_ACT_PROP_SZ_MARK,
+	[BNXT_ULP_ACT_PROP_IDX_COUNT] =
+		BNXT_ULP_ACT_PROP_SZ_COUNT,
+	[BNXT_ULP_ACT_PROP_IDX_METER] =
+		BNXT_ULP_ACT_PROP_SZ_METER,
+	[BNXT_ULP_ACT_PROP_IDX_SET_MAC_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_MAC_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_MAC_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_MAC_DST,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_VLAN] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_VLAN,
+	[BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_PCP] =
+		BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_PCP,
+	[BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_VID] =
+		BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_VID,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV4_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV4_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV4_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV4_DST,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV6_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV6_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV6_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV6_DST,
+	[BNXT_ULP_ACT_PROP_IDX_SET_TP_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_TP_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_TP_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_TP_DST,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_0] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_0,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_1] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_1,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_2] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_2,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_3] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_3,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_4] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_4,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_5] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_5,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_6] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_6,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_7] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_7,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_SMAC,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_IP,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_UDP,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN,
+	[BNXT_ULP_ACT_PROP_IDX_LAST] =
+		BNXT_ULP_ACT_PROP_SZ_LAST
+};
+
 struct bnxt_ulp_device_params ulp_device_params[] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
 		.global_fid_enable       = BNXT_ULP_SYM_YES,
@@ -25,3 +108,165 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 		.num_resources_per_flow  = 8
 	}
 };
+
+struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {BNXT_ULP_SYM_DECAP_FUNC_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP,
+	.result_operand = {(BNXT_ULP_ACT_PROP_IDX_VNIC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 1eed828..e52cc3f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -39,9 +39,113 @@ enum bnxt_ulp_regfile_index {
 	BNXT_ULP_REGFILE_INDEX_LAST
 };
 
+enum bnxt_ulp_resource_func {
+	BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE = 0,
+	BNXT_ULP_RESOURCE_FUNC_EM_TABLE = 1,
+	BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE = 2,
+	BNXT_ULP_RESOURCE_FUNC_IDENTIFIER = 3,
+	BNXT_ULP_RESOURCE_FUNC_HW_FID = 4,
+	BNXT_ULP_RESOURCE_FUNC_LAST = 5
+};
+
+enum bnxt_ulp_result_opc {
+	BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT = 0,
+	BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP = 1,
+	BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP_SZ = 2,
+	BNXT_ULP_RESULT_OPC_SET_TO_REGFILE = 3,
+	BNXT_ULP_RESULT_OPC_LAST = 4
+};
+
 enum bnxt_ulp_sym {
+	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
 	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_YES = 1
 };
 
+enum bnxt_ulp_act_prop_sz {
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_SZ = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_TYPE = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_NUM = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_L3_TYPE = 4,
+	BNXT_ULP_ACT_PROP_SZ_MPLS_POP_NUM = 4,
+	BNXT_ULP_ACT_PROP_SZ_MPLS_PUSH_NUM = 4,
+	BNXT_ULP_ACT_PROP_SZ_VNIC = 4,
+	BNXT_ULP_ACT_PROP_SZ_VPORT = 4,
+	BNXT_ULP_ACT_PROP_SZ_MARK = 4,
+	BNXT_ULP_ACT_PROP_SZ_COUNT = 4,
+	BNXT_ULP_ACT_PROP_SZ_METER = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_MAC_SRC = 8,
+	BNXT_ULP_ACT_PROP_SZ_SET_MAC_DST = 8,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_VLAN = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_PCP = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_VID = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV4_SRC = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV4_DST = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV6_SRC = 16,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV6_DST = 16,
+	BNXT_ULP_ACT_PROP_SZ_SET_TP_SRC = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_TP_DST = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_0 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_1 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_2 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_3 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_4 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_5 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_6 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_7 = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC = 6,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_SMAC = 6,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG = 8,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP = 32,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SRC = 16,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_UDP = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN = 32,
+	BNXT_ULP_ACT_PROP_SZ_LAST = 4
+};
+
+enum bnxt_ulp_act_prop_idx {
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ = 0,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ = 4,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_SZ = 8,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_TYPE = 12,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_NUM = 16,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE = 20,
+	BNXT_ULP_ACT_PROP_IDX_MPLS_POP_NUM = 24,
+	BNXT_ULP_ACT_PROP_IDX_MPLS_PUSH_NUM = 28,
+	BNXT_ULP_ACT_PROP_IDX_VNIC = 32,
+	BNXT_ULP_ACT_PROP_IDX_VPORT = 36,
+	BNXT_ULP_ACT_PROP_IDX_MARK = 40,
+	BNXT_ULP_ACT_PROP_IDX_COUNT = 44,
+	BNXT_ULP_ACT_PROP_IDX_METER = 48,
+	BNXT_ULP_ACT_PROP_IDX_SET_MAC_SRC = 52,
+	BNXT_ULP_ACT_PROP_IDX_SET_MAC_DST = 60,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_VLAN = 68,
+	BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_PCP = 72,
+	BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_VID = 76,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV4_SRC = 80,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV4_DST = 84,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_SRC = 88,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_DST = 104,
+	BNXT_ULP_ACT_PROP_IDX_SET_TP_SRC = 120,
+	BNXT_ULP_ACT_PROP_IDX_SET_TP_DST = 124,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_0 = 128,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_1 = 132,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_2 = 136,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_3 = 140,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_4 = 144,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_5 = 148,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_6 = 152,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_7 = 156,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC = 160,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC = 166,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG = 172,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP = 180,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC = 212,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP = 228,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN = 232,
+	BNXT_ULP_ACT_PROP_IDX_LAST = 264
+};
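+
+/*
+ * The IDX values above are byte offsets into the act_details array of
+ * struct ulp_rte_act_prop and the SZ values are the property sizes in
+ * bytes: e.g. the VNIC property occupies bytes 32-35 (IDX 32, SZ 4),
+ * so the next property, VPORT, starts at offset 36.
+ */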
+
 #endif /* _ULP_TEMPLATE_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 4b9d0b2..2b0a3d7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,7 +17,15 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
-/* Device specific parameters. */
+/*
+ * Structure to hold the action property details.
+ * It is an array of BNXT_ULP_ACT_PROP_IDX_LAST bytes.
+ */
+struct ulp_rte_act_prop {
+	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
+};
+
+/* Device specific parameters */
 struct bnxt_ulp_device_params {
 	uint8_t				description[16];
 	uint32_t			global_fid_enable;
@@ -31,10 +39,44 @@ struct bnxt_ulp_device_params {
 	uint32_t			num_resources_per_flow;
 };
 
+struct bnxt_ulp_mapper_act_tbl_info {
+	enum bnxt_ulp_resource_func	resource_func;
+	enum tf_tbl_type table_type;
+	uint8_t		direction;
+	uint8_t		srch_b4_alloc;
+	uint32_t	result_start_idx;
+	uint16_t	result_bit_size;
+	uint16_t	encap_num_fields;
+	uint16_t	result_num_fields;
+
+	enum bnxt_ulp_regfile_index	regfile_wr_idx;
+};
+
+struct bnxt_ulp_mapper_result_field_info {
+	uint8_t				name[64];
+	enum bnxt_ulp_result_opc	result_opcode;
+	uint16_t			field_bit_size;
+	uint8_t				result_operand[16];
+};
+
 /*
- * The ulp_device_params is indexed by the dev_id.
- * This table maintains the device specific parameters.
+ * The ulp_device_params is indexed by the dev_id
+ * This table maintains the device specific parameters
  */
 extern struct bnxt_ulp_device_params ulp_device_params[];
 
+/*
+ * The ulp_act_result_field_list provides the instructions for creating an
+ * action
+ * record.  It uses the same structure as the result list, but is only used for
+ * actions.
+ */
+extern
+struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[];
+
+/*
+ * The ulp_act_prop_map_table provides the mapping to index and size of action
+ * properties.
+ */
+extern uint32_t ulp_act_prop_map_table[];
+
 #endif /* _ULP_TEMPLATE_STRUCT_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 20/34] net/bnxt: add support to process key tables
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (18 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 19/34] net/bnxt: add support to process action tables Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 21/34] net/bnxt: add support to free key and action tables Venkat Duvvuru
                       ` (14 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch creates the classifier table entries for a flow.
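
For each classifier table instance the mapper builds the key and mask
blobs from the template key fields, then allocates and programs the
entry. A simplified sketch of the TCAM path added here (error
handling and the result blob construction trimmed):

    for (i = 0; i < num_kflds; i++) {
        ulp_mapper_keymask_field_process(parms, &kflds[i], &key, 1);
        ulp_mapper_keymask_field_process(parms, &kflds[i], &mask, 0);
    }
    rc = tf_alloc_tcam_entry(tfp, &aparms);
    /* ... fill the result blob, then tf_set_tcam_entry() ... */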

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c            | 773 +++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h            |   2 +
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c          |  80 ++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h          |  18 +
 drivers/net/bnxt/tf_ulp/ulp_template_db.c       | 829 ++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h       | 142 +++-
 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h | 133 ++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h   |  93 ++-
 8 files changed, 2062 insertions(+), 8 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 9cfc382..a041394 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -19,6 +19,9 @@
 int32_t
 ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms);
 
+int32_t
+ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms);
+
 /*
  * Get the size of the action property for a given index.
  *
@@ -37,10 +40,65 @@ ulp_mapper_act_prop_size_get(uint32_t idx)
 /*
- * Get the list of result fields that implement the flow action
+ * Get the list of key fields that implement the flow
  *
+ * tbl [in] A single table instance to get the key fields from
+ *
+ * num_flds [out] The number of key fields in the returned array
+ *
+ * returns array of Key fields, or NULL on error
+ */
+static struct bnxt_ulp_mapper_class_key_field_info *
+ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			  uint32_t *num_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_flds)
+		return NULL;
+
+	idx		= tbl->key_start_idx;
+	*num_flds	= tbl->key_num_fields;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_class_key_field_list[idx];
+}
+
+/*
+ * Get the list of data fields that implement the flow.
+ *
+ * tbl [in] A single table instance to get the data fields from
+ *
+ * num_flds [out] The number of data fields in the returned array.
+ *
+ * Returns array of data fields, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_result_field_info *
+ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			     uint32_t *num_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_flds)
+		return NULL;
+
+	idx		= tbl->result_start_idx;
+	*num_flds	= tbl->result_num_fields;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_class_result_field_list[idx];
+}
+
+/*
+ * Get the list of result fields that implement the flow action.
+ *
  * tbl [in] A single table instance to get the result fields from
  *
  * num_rslt_flds [out] The number of result fields in the returned array
  *
  * num_encap_flds [out] The number of encap fields in the returned array
  *
  * returns array of data fields, or NULL on error
  */
 static struct bnxt_ulp_mapper_result_field_info *
 ulp_mapper_act_result_fields_get(struct bnxt_ulp_mapper_act_tbl_info *tbl,
@@ -60,6 +118,106 @@ ulp_mapper_act_result_fields_get(struct bnxt_ulp_mapper_act_tbl_info *tbl,
 	return &ulp_act_result_field_list[idx];
 }
 
+/*
+ * Get the list of ident fields that implement the flow
+ *
+ * tbl [in] A single table instance to get the ident fields from
+ *
+ * num_flds [out] The number of ident fields in the returned array
+ *
+ * returns array of ident fields, or NULL on error
+ */
+static struct bnxt_ulp_mapper_ident_info *
+ulp_mapper_ident_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			    uint32_t *num_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_flds)
+		return NULL;
+
+	idx = tbl->ident_start_idx;
+	*num_flds = tbl->ident_nums;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_ident_list[idx];
+}
+
+static int32_t
+ulp_mapper_ident_process(struct bnxt_ulp_mapper_parms *parms,
+			 struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			 struct bnxt_ulp_mapper_ident_info *ident)
+{
+	struct ulp_flow_db_res_params	fid_parms;
+	uint64_t id = 0;
+	int32_t idx;
+	struct tf_alloc_identifier_parms iparms = { 0 };
+	struct tf_free_identifier_parms free_parms = { 0 };
+	struct tf *tfp;
+	int rc;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get tf pointer\n");
+		return -EINVAL;
+	}
+
+	idx = ident->regfile_wr_idx;
+
+	iparms.ident_type = ident->ident_type;
+	iparms.dir = tbl->direction;
+
+	rc = tf_alloc_identifier(tfp, &iparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Alloc ident %s:%d failed.\n",
+			    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
+			    iparms.ident_type);
+		return rc;
+	}
+
+	id = (uint64_t)tfp_cpu_to_be_64(iparms.id);
+	if (!ulp_regfile_write(parms->regfile, idx, id)) {
+		BNXT_TF_DBG(ERR, "Regfile[%d] write failed.\n", idx);
+		rc = -EINVAL;
+		/* Need to free the identifier, so goto error */
+		goto error;
+	}
+
+	/* Link the resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction		= tbl->direction;
+	fid_parms.resource_func	= ident->resource_func;
+	fid_parms.resource_type	= ident->ident_type;
+	fid_parms.resource_hndl	= iparms.id;
+	fid_parms.critical_resource	= 0;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		/* Need to free the identifier, so goto error */
+		goto error;
+	}
+
+	return 0;
+
+error:
+	/* Need to free the identifier */
+	free_parms.dir		= tbl->direction;
+	free_parms.ident_type	= ident->ident_type;
+	free_parms.id		= iparms.id;
+
+	(void)tf_free_identifier(tfp, &free_parms);
+
+	BNXT_TF_DBG(ERR, "Ident process failed for %s:%s\n",
+		    ident->name,
+		    (tbl->direction == TF_DIR_RX) ? "RX" : "TX");
+	return rc;
+}
+
 static int32_t
 ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 				struct bnxt_ulp_mapper_result_field_info *fld,
@@ -163,6 +321,89 @@ ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 
-/* Function to alloc action record and set the table. */
+/* Function to process a key/mask template field into the key or mask blob. */
 static int32_t
+ulp_mapper_keymask_field_process(struct bnxt_ulp_mapper_parms *parms,
+				 struct bnxt_ulp_mapper_class_key_field_info *f,
+				 struct ulp_blob *blob,
+				 uint8_t is_key)
+{
+	uint64_t regval;
+	uint16_t idx, bitlen;
+	uint32_t opcode;
+	uint8_t *operand;
+	struct ulp_regfile *regfile = parms->regfile;
+	uint8_t *val = NULL;
+	struct bnxt_ulp_mapper_class_key_field_info *fld = f;
+
+	if (is_key) {
+		operand = fld->spec_operand;
+		opcode	= fld->spec_opcode;
+	} else {
+		operand = fld->mask_operand;
+		opcode	= fld->mask_opcode;
+	}
+
+	bitlen = fld->field_bit_size;
+
+	switch (opcode) {
+	case BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT:
+		val = operand;
+		if (!ulp_blob_push(blob, val, bitlen)) {
+			BNXT_TF_DBG(ERR, "push to key blob failed\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_SPEC_OPC_ADD_PAD:
+		if (!ulp_blob_pad_push(blob, bitlen)) {
+			BNXT_TF_DBG(ERR, "Pad too large for blob\n");
+			return -EINVAL;
+		}
+
+		break;
+	case BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD:
+		if (!ulp_operand_read(operand, (uint8_t *)&idx,
+				      sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "key operand read failed.\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+		if (is_key)
+			val = parms->hdr_field[idx].spec;
+		else
+			val = parms->hdr_field[idx].mask;
+
+		if (!ulp_blob_push(blob, val, bitlen)) {
+			BNXT_TF_DBG(ERR, "push to key blob failed\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_SPEC_OPC_SET_TO_REGFILE:
+		if (!ulp_operand_read(operand, (uint8_t *)&idx,
+				      sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "key operand read failed.\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+
+		if (!ulp_regfile_read(regfile, idx, &regval)) {
+			BNXT_TF_DBG(ERR, "regfile[%d] read failed.\n",
+				    idx);
+			return -EINVAL;
+		}
+
+		val = ulp_blob_push_64(blob, &regval, bitlen);
+		if (!val) {
+			BNXT_TF_DBG(ERR, "push to key blob failed\n");
+			return -EINVAL;
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+/* Function to alloc action record and set the table. */
+static int32_t
 ulp_mapper_action_alloc_and_set(struct bnxt_ulp_mapper_parms *parms,
 				struct ulp_blob *blob)
 {
@@ -338,6 +579,489 @@ ulp_mapper_action_info_process(struct bnxt_ulp_mapper_parms *parms,
 	return rc;
 }
 
+static int32_t
+ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			    struct bnxt_ulp_mapper_class_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_class_key_field_info	*kflds;
+	struct ulp_blob key, mask, data;
+	uint32_t i, num_kflds;
+	struct tf *tfp;
+	int32_t rc, trc;
+	struct tf_alloc_tcam_entry_parms aparms		= { 0 };
+	struct tf_set_tcam_entry_parms sparms		= { 0 };
+	struct ulp_flow_db_res_params	fid_parms	= { 0 };
+	struct tf_free_tcam_entry_parms free_parms	= { 0 };
+	uint32_t hit = 0;
+	uint16_t tmplen = 0;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get truflow pointer\n");
+		return -EINVAL;
+	}
+
+	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
+	if (!kflds || !num_kflds) {
+		BNXT_TF_DBG(ERR, "Failed to get key fields\n");
+		return -EINVAL;
+	}
+
+	if (!ulp_blob_init(&key, tbl->key_bit_size, parms->order) ||
+	    !ulp_blob_init(&mask, tbl->key_bit_size, parms->order) ||
+	    !ulp_blob_init(&data, tbl->result_bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "blob inits failed.\n");
+		return -EINVAL;
+	}
+
+	/* create the key/mask */
+	/*
+	 * NOTE: The WC table will require some kind of flag to handle the
+	 * mode bits within the key/mask
+	 */
+	for (i = 0; i < num_kflds; i++) {
+		/* Setup the key */
+		rc = ulp_mapper_keymask_field_process(parms, &kflds[i],
+						      &key, 1);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Key field set failed.\n");
+			return rc;
+		}
+
+		/* Setup the mask */
+		rc = ulp_mapper_keymask_field_process(parms, &kflds[i],
+						      &mask, 0);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Mask field set failed.\n");
+			return rc;
+		}
+	}
+
+	aparms.dir		= tbl->direction;
+	aparms.tcam_tbl_type	= tbl->table_type;
+	aparms.search_enable	= tbl->srch_b4_alloc;
+	aparms.key_sz_in_bits	= tbl->key_bit_size;
+	aparms.key		= ulp_blob_data_get(&key, &tmplen);
+	if (tbl->key_bit_size != tmplen) {
+		BNXT_TF_DBG(ERR, "Key len (%d) != Expected (%d)\n",
+			    tmplen, tbl->key_bit_size);
+		return -EINVAL;
+	}
+
+	aparms.mask		= ulp_blob_data_get(&mask, &tmplen);
+	if (tbl->key_bit_size != tmplen) {
+		BNXT_TF_DBG(ERR, "Mask len (%d) != Expected (%d)\n",
+			    tmplen, tbl->key_bit_size);
+		return -EINVAL;
+	}
+
+	aparms.priority		= tbl->priority;
+
+	/*
+	 * All failures after this call succeeds require the entry to be
+	 * freed: do not return directly on failure, goto error instead.
+	 */
+	rc = tf_alloc_tcam_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "tcam alloc failed rc=%d.\n", rc);
+		return rc;
+	}
+
+	hit = aparms.hit;
+
+	/* Build the result */
+	if (!tbl->srch_b4_alloc || !hit) {
+		struct bnxt_ulp_mapper_result_field_info *dflds;
+		struct bnxt_ulp_mapper_ident_info *idents;
+		uint32_t num_dflds, num_idents;
+
+		/* Alloc identifiers */
+		idents = ulp_mapper_ident_fields_get(tbl, &num_idents);
+
+		for (i = 0; i < num_idents; i++) {
+			rc = ulp_mapper_ident_process(parms, tbl, &idents[i]);
+
+			/* Already logged the error, just return */
+			if (rc)
+				goto error;
+		}
+
+		/* Create the result data blob */
+		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
+		if (!dflds || !num_dflds) {
+			BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
+			rc = -EINVAL;
+			goto error;
+		}
+
+		for (i = 0; i < num_dflds; i++) {
+			rc = ulp_mapper_result_field_process(parms,
+							     &dflds[i],
+							     &data);
+			if (rc) {
+				BNXT_TF_DBG(ERR, "Failed to set data fields\n");
+				goto error;
+			}
+		}
+
+		sparms.dir		= aparms.dir;
+		sparms.tcam_tbl_type	= aparms.tcam_tbl_type;
+		sparms.idx		= aparms.idx;
+		/* Already verified the key/mask lengths */
+		sparms.key		= ulp_blob_data_get(&key, &tmplen);
+		sparms.mask		= ulp_blob_data_get(&mask, &tmplen);
+		sparms.key_sz_in_bits	= tbl->key_bit_size;
+		sparms.result		= ulp_blob_data_get(&data, &tmplen);
+
+		if (tbl->result_bit_size != tmplen) {
+			BNXT_TF_DBG(ERR, "Result len (%d) != Expected (%d)\n",
+				    tmplen, tbl->result_bit_size);
+			rc = -EINVAL;
+			goto error;
+		}
+		sparms.result_sz_in_bits = tbl->result_bit_size;
+
+		rc = tf_set_tcam_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "tcam[%d][%s][%d] write failed.\n",
+				    sparms.tcam_tbl_type,
+				    (sparms.dir == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx);
+			goto error;
+		}
+	} else {
+		BNXT_TF_DBG(ERR, "Not supporting search before alloc now\n");
+		rc = -EINVAL;
+		goto error;
+	}
+
+	/* Link the resource to the flow in the flow db */
+	fid_parms.direction = tbl->direction;
+	fid_parms.resource_func	= tbl->resource_func;
+	fid_parms.resource_type	= tbl->table_type;
+	fid_parms.critical_resource = tbl->critical_resource;
+	fid_parms.resource_hndl	= aparms.idx;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		/* Need to free the allocated tcam entry, so goto error */
+		goto error;
+	}
+
+	return 0;
+error:
+	free_parms.dir			= tbl->direction;
+	free_parms.tcam_tbl_type	= tbl->table_type;
+	free_parms.idx			= aparms.idx;
+	trc = tf_free_tcam_entry(tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free tcam[%d][%d][%d] on failure\n",
+			    tbl->table_type, tbl->direction, aparms.idx);
+
+	return rc;
+}
+
+static int32_t
+ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			  struct bnxt_ulp_mapper_class_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_class_key_field_info	*kflds;
+	struct bnxt_ulp_mapper_result_field_info *dflds;
+	struct ulp_blob key, data;
+	uint32_t i, num_kflds, num_dflds;
+	uint16_t tmplen;
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	struct ulp_rte_act_prop	 *a_prop = parms->act_prop;
+	struct ulp_flow_db_res_params	fid_parms = { 0 };
+	struct tf_insert_em_entry_parms iparms = { 0 };
+	struct tf_delete_em_entry_parms free_parms = { 0 };
+	int32_t	trc;
+	int32_t rc = 0;
+
+	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
+	if (!kflds || !num_kflds) {
+		BNXT_TF_DBG(ERR, "Failed to get key fields\n");
+		return -EINVAL;
+	}
+
+	/* Initialize the key/result blobs */
+	if (!ulp_blob_init(&key, tbl->blob_key_bit_size, parms->order) ||
+	    !ulp_blob_init(&data, tbl->result_bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "blob inits failed.\n");
+		return -EINVAL;
+	}
+
+	/* create the key */
+	for (i = 0; i < num_kflds; i++) {
+		/* Setup the key */
+		rc = ulp_mapper_keymask_field_process(parms, &kflds[i],
+						      &key, 1);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Key field set failed.\n");
+			return rc;
+		}
+	}
+
+	/*
+	 * TBD: Identifiers would normally be processed here to support
+	 * recycle or loopback. Recycle is not supported for now.
+	 */
+
+	/* Create the result data blob */
+	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
+	if (!dflds || !num_dflds) {
+		BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_dflds; i++) {
+		struct bnxt_ulp_mapper_result_field_info *fld;
+
+		fld = &dflds[i];
+
+		rc = ulp_mapper_result_field_process(parms,
+						     fld,
+						     &data);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Failed to set data fields.\n");
+			return rc;
+		}
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
+					     &iparms.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope rc=%d\n", rc);
+		return rc;
+	}
+
+	/*
+	 * NOTE: the actual blob size will differ from the size in the tbl
+	 * entry due to the padding.
+	 */
+	iparms.dup_check		= 0;
+	iparms.dir			= tbl->direction;
+	iparms.mem			= tbl->mem;
+	iparms.key			= ulp_blob_data_get(&key, &tmplen);
+	iparms.key_sz_in_bits		= tbl->key_bit_size;
+	iparms.em_record		= ulp_blob_data_get(&data, &tmplen);
+	iparms.em_record_sz_in_bits	= tbl->result_bit_size;
+
+	rc = tf_insert_em_entry(tfp, &iparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to insert em entry rc=%d.\n", rc);
+		return rc;
+	}
+
+	if (tbl->mark_enable &&
+	    ULP_BITMAP_ISSET(parms->act_bitmap->bits,
+			     BNXT_ULP_ACTION_BIT_MARK)) {
+		uint32_t val, mark, gfid, flag;
+		/* TBD: Need to determine if GFID is enabled globally */
+		if (sizeof(val) != BNXT_ULP_ACT_PROP_SZ_MARK) {
+			BNXT_TF_DBG(ERR, "Mark size (%d) != expected (%ld)\n",
+				    BNXT_ULP_ACT_PROP_SZ_MARK, sizeof(val));
+			rc = -EINVAL;
+			goto error;
+		}
+
+		memcpy(&val,
+		       &a_prop->act_details[BNXT_ULP_ACT_PROP_IDX_MARK],
+		       sizeof(val));
+
+		mark = tfp_be_to_cpu_32(val);
+
+		TF_GET_GFID_FROM_FLOW_ID(iparms.flow_id, gfid);
+		TF_GET_FLAG_FROM_FLOW_ID(iparms.flow_id, flag);
+
+		rc = ulp_mark_db_mark_add(parms->ulp_ctx,
+					  (flag == TF_GFID_TABLE_EXTERNAL),
+					  gfid,
+					  mark);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Failed to add mark to flow\n");
+			goto error;
+		}
+
+		/*
+		 * Link the mark resource to the flow in the flow db
+		 * The mark is never the critical resource, so it is 0.
+		 */
+		memset(&fid_parms, 0, sizeof(fid_parms));
+		fid_parms.direction	= tbl->direction;
+		fid_parms.resource_func	= BNXT_ULP_RESOURCE_FUNC_HW_FID;
+		fid_parms.resource_type	= tbl->table_type;
+		fid_parms.resource_hndl	= iparms.flow_id;
+		fid_parms.critical_resource = 0;
+
+		rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+					      parms->tbl_idx,
+					      parms->fid,
+					      &fid_parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Fail to link res to flow rc = %d\n",
+				    rc);
+			/* Need to free the EM entry, so goto error */
+			goto error;
+		}
+	}
+
+	/* Link the EM resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction		= tbl->direction;
+	fid_parms.resource_func		= tbl->resource_func;
+	fid_parms.resource_type		= tbl->table_type;
+	fid_parms.critical_resource	= tbl->critical_resource;
+	fid_parms.resource_hndl		= iparms.flow_handle;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Fail to link res to flow rc = %d\n",
+			    rc);
+		/* Need to free the EM entry, so goto error */
+		goto error;
+	}
+
+	return 0;
+error:
+	free_parms.dir		= iparms.dir;
+	free_parms.mem		= iparms.mem;
+	free_parms.tbl_scope_id	= iparms.tbl_scope_id;
+	free_parms.flow_handle	= iparms.flow_handle;
+
+	trc = tf_delete_em_entry(tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to delete EM entry on failed add\n");
+
+	return rc;
+}
+
+static int32_t
+ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			     struct bnxt_ulp_mapper_class_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_result_field_info *flds;
+	struct ulp_flow_db_res_params	fid_parms;
+	struct ulp_blob	data;
+	uint64_t idx;
+	uint16_t tmplen;
+	uint32_t i, num_flds;
+	int32_t rc = 0, trc = 0;
+	struct tf_alloc_tbl_entry_parms	aparms = { 0 };
+	struct tf_set_tbl_entry_parms	sparms = { 0 };
+	struct tf_free_tbl_entry_parms	free_parms = { 0 };
+
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+
+	if (!ulp_blob_init(&data, tbl->result_bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "Failed initial index table blob\n");
+		return -EINVAL;
+	}
+
+	flds = ulp_mapper_result_fields_get(tbl, &num_flds);
+	if (!flds || !num_flds) {
+		BNXT_TF_DBG(ERR, "Template undefined for action\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_flds; i++) {
+		rc = ulp_mapper_result_field_process(parms,
+						     &flds[i],
+						     &data);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "data field failed\n");
+			return rc;
+		}
+	}
+
+	aparms.dir		= tbl->direction;
+	aparms.type		= tbl->table_type;
+	aparms.search_enable	= tbl->srch_b4_alloc;
+	aparms.result		= ulp_blob_data_get(&data, &tmplen);
+	aparms.result_sz_in_bytes = ULP_SZ_BITS2BYTES(tbl->result_bit_size);
+
+	/* All failures after the alloc succeeds require a free */
+	rc = tf_alloc_tbl_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Alloc table[%d][%s] failed rc=%d\n",
+			    tbl->table_type,
+			    (tbl->direction == TF_DIR_RX) ? "RX" : "TX",
+			    rc);
+		return rc;
+	}
+
+	/* Always storing values in Regfile in BE */
+	idx = tfp_cpu_to_be_64(aparms.idx);
+	rc = ulp_regfile_write(parms->regfile, tbl->regfile_wr_idx, idx);
+	if (!rc) {
+		BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
+			    tbl->regfile_wr_idx);
+		goto error;
+	}
+
+	if (!tbl->srch_b4_alloc) {
+		sparms.dir		= tbl->direction;
+		sparms.type		= tbl->table_type;
+		sparms.data		= ulp_blob_data_get(&data, &tmplen);
+		sparms.data_sz_in_bytes =
+			ULP_SZ_BITS2BYTES(tbl->result_bit_size);
+		sparms.idx		= aparms.idx;
+
+		rc = tf_set_tbl_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Set table[%d][%s][%d] failed rc=%d\n",
+				    tbl->table_type,
+				    (tbl->direction == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx,
+				    rc);
+
+			goto error;
+		}
+	}
+
+	/* Link the resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction	= tbl->direction;
+	fid_parms.resource_func	= tbl->resource_func;
+	fid_parms.resource_type	= tbl->table_type;
+	fid_parms.resource_hndl	= aparms.idx;
+	fid_parms.critical_resource	= 0;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		goto error;
+	}
+
+	return rc;
+error:
+	/*
+	 * Free the allocated resource since we failed to either
+	 * write to the entry or link the flow
+	 */
+	free_parms.dir	= tbl->direction;
+	free_parms.type	= tbl->table_type;
+	free_parms.idx	= aparms.idx;
+
+	trc = tf_free_tbl_entry(tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free tbl entry on failure\n");
+
+	return rc;
+}
+
 /*
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
@@ -362,3 +1086,48 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	return rc;
 }
+
+/* Create the classifier table entries for a flow. */
+int32_t
+ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
+{
+	uint32_t	i;
+	int32_t		rc = 0;
+
+	if (!parms)
+		return -EINVAL;
+
+	if (!parms->ctbls || !parms->num_ctbls) {
+		BNXT_TF_DBG(ERR, "No class tables for template[%d][%d].\n",
+			    parms->dev_id, parms->class_tid);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < parms->num_ctbls; i++) {
+		struct bnxt_ulp_mapper_class_tbl_info *tbl = &parms->ctbls[i];
+
+		switch (tbl->resource_func) {
+		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
+			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
+			break;
+		case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+			rc = ulp_mapper_em_tbl_process(parms, tbl);
+			break;
+		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+			rc = ulp_mapper_index_tbl_process(parms, tbl);
+			break;
+		default:
+			BNXT_TF_DBG(ERR, "Unexpected class resource %d\n",
+				    tbl->resource_func);
+			return -EINVAL;
+		}
+
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Resource type %d failed\n",
+				    tbl->resource_func);
+			return rc;
+		}
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index adbcec2..2221e12 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -15,6 +15,8 @@
 #include "bnxt_ulp.h"
 #include "ulp_utils.h"
 
+#define ULP_SZ_BITS2BYTES(x) (((x) + 7) / 8)
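+/*
+ * Illustration: the macro rounds a bit count up to whole bytes, e.g.
+ * ULP_SZ_BITS2BYTES(10) == 2 and ULP_SZ_BITS2BYTES(16) == 2. The mapper
+ * uses it to convert a table's result_bit_size into the byte count
+ * expected by the tf_core table APIs.
+ */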
+
 /* Internal Structure for passing the arguments around */
 struct bnxt_ulp_mapper_parms {
 	uint32_t				dev_id;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 9e4307e..837064e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -6,14 +6,71 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_log.h>
+#include "bnxt.h"
 #include "bnxt_ulp.h"
 #include "tf_ext_flow_handle.h"
 #include "ulp_mark_mgr.h"
 #include "bnxt_tf_common.h"
-#include "../bnxt.h"
 #include "ulp_template_db.h"
 #include "ulp_template_struct.h"
 
+static inline uint32_t
+ulp_mark_db_idx_get(bool is_gfid, uint32_t fid, struct bnxt_ulp_mark_tbl *mtbl)
+{
+	uint32_t idx = 0, hashtype = 0;
+
+	if (is_gfid) {
+		TF_GET_HASH_TYPE_FROM_GFID(fid, hashtype);
+		TF_GET_HASH_INDEX_FROM_GFID(fid, idx);
+
+		/* Need to truncate anything beyond supported flows */
+		idx &= mtbl->gfid_mask;
+
+		if (hashtype)
+			idx |= mtbl->gfid_type_bit;
+	} else {
+		idx = fid;
+	}
+
+	return idx;
+}
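+
+/*
+ * Illustrative only: with example table parameters gfid_mask = 0x0fff and
+ * gfid_type_bit = 0x1000, a GFID carrying hash index 0x2345 and hash
+ * type 1 maps to index (0x2345 & 0x0fff) | 0x1000 = 0x1345. An LFID is
+ * used as the index unchanged.
+ */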
+
+static int32_t
+ulp_mark_db_mark_set(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t mark)
+{
+	struct bnxt_ulp_mark_tbl	*mtbl;
+	uint32_t			idx = 0;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+	if (!mtbl) {
+		BNXT_TF_DBG(ERR, "Unable to get Mark DB\n");
+		return -EINVAL;
+	}
+
+	idx = ulp_mark_db_idx_get(is_gfid, fid, mtbl);
+
+	if (is_gfid) {
+		BNXT_TF_DBG(ERR, "Set GFID[0x%0x] = 0x%0x\n", idx, mark);
+
+		mtbl->gfid_tbl[idx].mark_id = mark;
+		mtbl->gfid_tbl[idx].valid = true;
+	} else {
+		/* For the LFID, the FID is used as the index */
+		mtbl->lfid_tbl[fid].mark_id = mark;
+		mtbl->lfid_tbl[fid].valid = true;
+	}
+
+	return 0;
+}
+
 /*
  * Allocate and Initialize all Mark Manager resources for this ulp context.
  *
@@ -117,3 +174,24 @@ ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
 
 	return 0;
 }
+
+/*
+ * Adds a Mark to the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * gfid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] The mark to be associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t gfid,
+		     uint32_t mark)
+{
+	return ulp_mark_db_mark_set(ctxt, is_gfid, gfid, mark);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index 5948683..18abea4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -54,4 +54,22 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
 int32_t
 ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Adds a Mark to the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * gfid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] The mark to be associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t gfid,
+		     uint32_t mark);
+
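+/*
+ * Example usage (mirrors the EM table path in ulp_mapper.c): once an
+ * exact match insert returns a flow id, the mark is stored with
+ *	ulp_mark_db_mark_add(ulp_ctx, (flag == TF_GFID_TABLE_EXTERNAL),
+ *			     gfid, mark);
+ */
+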
 #endif /* _ULP_MARK_MGR_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index fc77800..aefece8 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -9,6 +9,7 @@
  */
 
 #include "ulp_template_db.h"
+#include "ulp_template_field_db.h"
 #include "ulp_template_struct.h"
 
 uint32_t ulp_act_prop_map_table[] = {
@@ -109,6 +110,834 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 	}
 };
 
+struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_ETH_DMAC >> 8) & 0xff,
+		BNXT_ULP_HF0_O_ETH_DMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {(BNXT_ULP_HF0_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF0_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF0_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_L3_HDR_TYPE_IPV4,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_L2_HDR_TYPE_DIX,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x40, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_PKT_TYPE_L2,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_ADD_PAD,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF0_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF0_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF0_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF0_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_ETH_SMAC >> 8) & 0xff,
+		BNXT_ULP_HF0_O_ETH_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_REGFILE,
+	.spec_operand = {(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
+struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_REGFILE,
+	.result_operand = {(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x40, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {(0x00fd >> 8) & 0xff,
+		0x00fd & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 5,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x15, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 33,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_REGFILE,
+	.result_operand = {(BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 5,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x03, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 9,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {(0x00c5 >> 8) & 0xff,
+		0x00c5 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x03, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
+struct bnxt_ulp_mapper_ident_info ulp_ident_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+	.ident_type = TF_IDENT_TYPE_L2_CTXT,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0,
+	.ident_bit_size = 10,
+	.ident_bit_pos = 54
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+	.ident_type = TF_IDENT_TYPE_EM_PROF,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0,
+	.ident_bit_size = 8,
+	.ident_bit_pos = 2
+	}
+};
+
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
 	.field_bit_size = 14,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index e52cc3f..733836a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -13,6 +13,37 @@
 
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 
+enum bnxt_ulp_action_bit {
+	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
+	BNXT_ULP_ACTION_BIT_DROP             = 0x0000000000000002,
+	BNXT_ULP_ACTION_BIT_COUNT            = 0x0000000000000004,
+	BNXT_ULP_ACTION_BIT_RSS              = 0x0000000000000008,
+	BNXT_ULP_ACTION_BIT_METER            = 0x0000000000000010,
+	BNXT_ULP_ACTION_BIT_VNIC             = 0x0000000000000020,
+	BNXT_ULP_ACTION_BIT_VPORT            = 0x0000000000000040,
+	BNXT_ULP_ACTION_BIT_VXLAN_DECAP      = 0x0000000000000080,
+	BNXT_ULP_ACTION_BIT_NVGRE_DECAP      = 0x0000000000000100,
+	BNXT_ULP_ACTION_BIT_OF_POP_MPLS      = 0x0000000000000200,
+	BNXT_ULP_ACTION_BIT_OF_PUSH_MPLS     = 0x0000000000000400,
+	BNXT_ULP_ACTION_BIT_MAC_SWAP         = 0x0000000000000800,
+	BNXT_ULP_ACTION_BIT_SET_MAC_SRC      = 0x0000000000001000,
+	BNXT_ULP_ACTION_BIT_SET_MAC_DST      = 0x0000000000002000,
+	BNXT_ULP_ACTION_BIT_OF_POP_VLAN      = 0x0000000000004000,
+	BNXT_ULP_ACTION_BIT_OF_PUSH_VLAN     = 0x0000000000008000,
+	BNXT_ULP_ACTION_BIT_OF_SET_VLAN_PCP  = 0x0000000000010000,
+	BNXT_ULP_ACTION_BIT_OF_SET_VLAN_VID  = 0x0000000000020000,
+	BNXT_ULP_ACTION_BIT_SET_IPV4_SRC     = 0x0000000000040000,
+	BNXT_ULP_ACTION_BIT_SET_IPV4_DST     = 0x0000000000080000,
+	BNXT_ULP_ACTION_BIT_SET_IPV6_SRC     = 0x0000000000100000,
+	BNXT_ULP_ACTION_BIT_SET_IPV6_DST     = 0x0000000000200000,
+	BNXT_ULP_ACTION_BIT_DEC_TTL          = 0x0000000000400000,
+	BNXT_ULP_ACTION_BIT_SET_TP_SRC       = 0x0000000000800000,
+	BNXT_ULP_ACTION_BIT_SET_TP_DST       = 0x0000000001000000,
+	BNXT_ULP_ACTION_BIT_VXLAN_ENCAP      = 0x0000000002000000,
+	BNXT_ULP_ACTION_BIT_NVGRE_ENCAP      = 0x0000000004000000,
+	BNXT_ULP_ACTION_BIT_LAST             = 0x0000000008000000
+};
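+
+/*
+ * The action bits form a 64-bit bitmap. Template matching tests bits
+ * with ULP_BITMAP_ISSET() from ulp_utils.h, e.g.
+ *	ULP_BITMAP_ISSET(parms->act_bitmap->bits, BNXT_ULP_ACTION_BIT_MARK)
+ * as done in the exact match table processing path.
+ */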
+
 enum bnxt_ulp_byte_order {
 	BNXT_ULP_BYTE_ORDER_BE,
 	BNXT_ULP_BYTE_ORDER_LE,
@@ -35,8 +66,48 @@ enum bnxt_ulp_fmf_mask {
 	BNXT_ULP_FMF_MASK_LAST
 };
 
+enum bnxt_ulp_mark_enable {
+	BNXT_ULP_MARK_ENABLE_NO = 0,
+	BNXT_ULP_MARK_ENABLE_YES = 1,
+	BNXT_ULP_MARK_ENABLE_LAST = 2
+};
+
+enum bnxt_ulp_mask_opc {
+	BNXT_ULP_MASK_OPC_SET_TO_CONSTANT = 0,
+	BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD = 1,
+	BNXT_ULP_MASK_OPC_SET_TO_REGFILE = 2,
+	BNXT_ULP_MASK_OPC_ADD_PAD = 3,
+	BNXT_ULP_MASK_OPC_LAST = 4
+};
+
+enum bnxt_ulp_priority {
+	BNXT_ULP_PRIORITY_LEVEL_0 = 0,
+	BNXT_ULP_PRIORITY_LEVEL_1 = 1,
+	BNXT_ULP_PRIORITY_LEVEL_2 = 2,
+	BNXT_ULP_PRIORITY_LEVEL_3 = 3,
+	BNXT_ULP_PRIORITY_LEVEL_4 = 4,
+	BNXT_ULP_PRIORITY_LEVEL_5 = 5,
+	BNXT_ULP_PRIORITY_LEVEL_6 = 6,
+	BNXT_ULP_PRIORITY_LEVEL_7 = 7,
+	BNXT_ULP_PRIORITY_NOT_USED = 8,
+	BNXT_ULP_PRIORITY_LAST = 9
+};
+
 enum bnxt_ulp_regfile_index {
-	BNXT_ULP_REGFILE_INDEX_LAST
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 0,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 1,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 2,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 3,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 4,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 5,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 6,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 7,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN = 8,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 9,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 10,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 11,
+	BNXT_ULP_REGFILE_INDEX_NOT_USED = 12,
+	BNXT_ULP_REGFILE_INDEX_LAST = 13
 };
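+
+/*
+ * Regfile indices name the slots used to pass allocated resource handles
+ * between tables. Values are always stored big-endian; e.g. the index
+ * table path does:
+ *	idx = tfp_cpu_to_be_64(aparms.idx);
+ *	ulp_regfile_write(parms->regfile, tbl->regfile_wr_idx, idx);
+ */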
 
 enum bnxt_ulp_resource_func {
@@ -56,9 +127,78 @@ enum bnxt_ulp_result_opc {
 	BNXT_ULP_RESULT_OPC_LAST = 4
 };
 
+enum bnxt_ulp_spec_opc {
+	BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT = 0,
+	BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD = 1,
+	BNXT_ULP_SPEC_OPC_SET_TO_REGFILE = 2,
+	BNXT_ULP_SPEC_OPC_ADD_PAD = 3,
+	BNXT_ULP_SPEC_OPC_LAST = 4
+};
+
 enum bnxt_ulp_sym {
+	BNXT_ULP_SYM_BIG_ENDIAN = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
+	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
+	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
+	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
+	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
+	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
+	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
+	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
+	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
 	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
+	BNXT_ULP_SYM_NO = 0,
+	BNXT_ULP_SYM_PKT_TYPE_L2 = 0,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_TL4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_TL4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_GENEVE = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_GRE = 3,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_PPPOE = 6,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR1 = 8,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR2 = 9,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
 	BNXT_ULP_SYM_YES = 1
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_field_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
new file mode 100644
index 0000000..1f58ace
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* date: Mon Mar  9 02:37:53 2020
+ * version: 0_0
+ */
+
+#ifndef _ULP_HDR_FIELD_ENUMS_H_
+#define _ULP_HDR_FIELD_ENUMS_H_
+
+/* class_template_id = 0: ingress flow */
+enum bnxt_ulp_hf0 {
+	BNXT_ULP_HF0_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HF0_O_VTAG_NUM = 1,
+	BNXT_ULP_HF0_I_VTAG_NUM = 2,
+	BNXT_ULP_HF0_SVIF_INDEX = 3,
+	BNXT_ULP_HF0_O_ETH_DMAC = 4,
+	BNXT_ULP_HF0_O_ETH_SMAC = 5,
+	BNXT_ULP_HF0_O_ETH_TYPE = 6,
+	BNXT_ULP_HF0_OO_VLAN_CFI_PRI = 7,
+	BNXT_ULP_HF0_OO_VLAN_VID = 8,
+	BNXT_ULP_HF0_OO_VLAN_TYPE = 9,
+	BNXT_ULP_HF0_OI_VLAN_CFI_PRI = 10,
+	BNXT_ULP_HF0_OI_VLAN_VID = 11,
+	BNXT_ULP_HF0_OI_VLAN_TYPE = 12,
+	BNXT_ULP_HF0_O_IPV4_VER = 13,
+	BNXT_ULP_HF0_O_IPV4_TOS = 14,
+	BNXT_ULP_HF0_O_IPV4_LEN = 15,
+	BNXT_ULP_HF0_O_IPV4_FRAG_ID = 16,
+	BNXT_ULP_HF0_O_IPV4_FRAG_OFF = 17,
+	BNXT_ULP_HF0_O_IPV4_TTL = 18,
+	BNXT_ULP_HF0_O_IPV4_NEXT_PID = 19,
+	BNXT_ULP_HF0_O_IPV4_CSUM = 20,
+	BNXT_ULP_HF0_O_IPV4_SRC_ADDR = 21,
+	BNXT_ULP_HF0_O_IPV4_DST_ADDR = 22,
+	BNXT_ULP_HF0_O_UDP_SRC_PORT = 23,
+	BNXT_ULP_HF0_O_UDP_DST_PORT = 24,
+	BNXT_ULP_HF0_O_UDP_LENGTH = 25,
+	BNXT_ULP_HF0_O_UDP_CSUM = 26,
+	BNXT_ULP_HF0_T_VXLAN_FLAGS = 27,
+	BNXT_ULP_HF0_T_VXLAN_RSVD0 = 28,
+	BNXT_ULP_HF0_T_VXLAN_VNI = 29,
+	BNXT_ULP_HF0_T_VXLAN_RSVD1 = 30,
+	BNXT_ULP_HF0_I_ETH_DMAC = 31,
+	BNXT_ULP_HF0_I_ETH_SMAC = 32,
+	BNXT_ULP_HF0_I_ETH_TYPE = 33,
+	BNXT_ULP_HF0_IO_VLAN_CFI_PRI = 34,
+	BNXT_ULP_HF0_IO_VLAN_VID = 35,
+	BNXT_ULP_HF0_IO_VLAN_TYPE = 36,
+	BNXT_ULP_HF0_II_VLAN_CFI_PRI = 37,
+	BNXT_ULP_HF0_II_VLAN_VID = 38,
+	BNXT_ULP_HF0_II_VLAN_TYPE = 39,
+	BNXT_ULP_HF0_I_IPV4_VER = 40,
+	BNXT_ULP_HF0_I_IPV4_TOS = 41,
+	BNXT_ULP_HF0_I_IPV4_LEN = 42,
+	BNXT_ULP_HF0_I_IPV4_FRAG_ID = 43,
+	BNXT_ULP_HF0_I_IPV4_FRAG_OFF = 44,
+	BNXT_ULP_HF0_I_IPV4_TTL = 45,
+	BNXT_ULP_HF0_I_IPV4_NEXT_PID = 46,
+	BNXT_ULP_HF0_I_IPV4_CSUM = 47,
+	BNXT_ULP_HF0_I_IPV4_SRC_ADDR = 48,
+	BNXT_ULP_HF0_I_IPV4_DST_ADDR = 49,
+	BNXT_ULP_HF0_I_UDP_SRC_PORT = 50,
+	BNXT_ULP_HF0_I_UDP_DST_PORT = 51,
+	BNXT_ULP_HF0_I_UDP_LENGTH = 52,
+	BNXT_ULP_HF0_I_UDP_CSUM = 53
+};
+
+/* class_template_id = 1: egress flow */
+enum bnxt_ulp_hf1 {
+	BNXT_ULP_HF1_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HF1_O_VTAG_NUM = 1,
+	BNXT_ULP_HF1_I_VTAG_NUM = 2,
+	BNXT_ULP_HF1_SVIF_INDEX = 3,
+	BNXT_ULP_HF1_O_ETH_DMAC = 4,
+	BNXT_ULP_HF1_O_ETH_SMAC = 5,
+	BNXT_ULP_HF1_O_ETH_TYPE = 6,
+	BNXT_ULP_HF1_OO_VLAN_CFI_PRI = 7,
+	BNXT_ULP_HF1_OO_VLAN_VID = 8,
+	BNXT_ULP_HF1_OO_VLAN_TYPE = 9,
+	BNXT_ULP_HF1_OI_VLAN_CFI_PRI = 10,
+	BNXT_ULP_HF1_OI_VLAN_VID = 11,
+	BNXT_ULP_HF1_OI_VLAN_TYPE = 12,
+	BNXT_ULP_HF1_O_IPV4_VER = 13,
+	BNXT_ULP_HF1_O_IPV4_TOS = 14,
+	BNXT_ULP_HF1_O_IPV4_LEN = 15,
+	BNXT_ULP_HF1_O_IPV4_FRAG_ID = 16,
+	BNXT_ULP_HF1_O_IPV4_FRAG_OFF = 17,
+	BNXT_ULP_HF1_O_IPV4_TTL = 18,
+	BNXT_ULP_HF1_O_IPV4_NEXT_PID = 19,
+	BNXT_ULP_HF1_O_IPV4_CSUM = 20,
+	BNXT_ULP_HF1_O_IPV4_SRC_ADDR = 21,
+	BNXT_ULP_HF1_O_IPV4_DST_ADDR = 22,
+	BNXT_ULP_HF1_O_UDP_SRC_PORT = 23,
+	BNXT_ULP_HF1_O_UDP_DST_PORT = 24,
+	BNXT_ULP_HF1_O_UDP_LENGTH = 25,
+	BNXT_ULP_HF1_O_UDP_CSUM = 26
+};
+
+/* class_template_id = 2: ingress flow */
+enum bnxt_ulp_hf2 {
+	BNXT_ULP_HF2_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HF2_O_VTAG_NUM = 1,
+	BNXT_ULP_HF2_I_VTAG_NUM = 2,
+	BNXT_ULP_HF2_SVIF_INDEX = 3,
+	BNXT_ULP_HF2_O_ETH_DMAC = 4,
+	BNXT_ULP_HF2_O_ETH_SMAC = 5,
+	BNXT_ULP_HF2_O_ETH_TYPE = 6,
+	BNXT_ULP_HF2_OO_VLAN_CFI_PRI = 7,
+	BNXT_ULP_HF2_OO_VLAN_VID = 8,
+	BNXT_ULP_HF2_OO_VLAN_TYPE = 9,
+	BNXT_ULP_HF2_OI_VLAN_CFI_PRI = 10,
+	BNXT_ULP_HF2_OI_VLAN_VID = 11,
+	BNXT_ULP_HF2_OI_VLAN_TYPE = 12,
+	BNXT_ULP_HF2_O_IPV4_VER = 13,
+	BNXT_ULP_HF2_O_IPV4_TOS = 14,
+	BNXT_ULP_HF2_O_IPV4_LEN = 15,
+	BNXT_ULP_HF2_O_IPV4_FRAG_ID = 16,
+	BNXT_ULP_HF2_O_IPV4_FRAG_OFF = 17,
+	BNXT_ULP_HF2_O_IPV4_TTL = 18,
+	BNXT_ULP_HF2_O_IPV4_NEXT_PID = 19,
+	BNXT_ULP_HF2_O_IPV4_CSUM = 20,
+	BNXT_ULP_HF2_O_IPV4_SRC_ADDR = 21,
+	BNXT_ULP_HF2_O_IPV4_DST_ADDR = 22,
+	BNXT_ULP_HF2_O_UDP_SRC_PORT = 23,
+	BNXT_ULP_HF2_O_UDP_DST_PORT = 24,
+	BNXT_ULP_HF2_O_UDP_LENGTH = 25,
+	BNXT_ULP_HF2_O_UDP_CSUM = 26
+};
+
+#endif /* _ULP_HDR_FIELD_ENUMS_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 2b0a3d7..e28d049 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,9 +17,21 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
+/* Structure to store the protocol fields */
+#define RTE_PARSER_FLOW_HDR_FIELD_SIZE		16
+struct ulp_rte_hdr_field {
+	uint8_t		spec[RTE_PARSER_FLOW_HDR_FIELD_SIZE];
+	uint8_t		mask[RTE_PARSER_FLOW_HDR_FIELD_SIZE];
+	uint32_t	size;
+};
+
+struct ulp_rte_act_bitmap {
+	uint64_t	bits;
+};
+
 /*
- * structure to hold the action property details
- * It is a array of 128 bytes
+ * Structure to hold the action property details.
+ * It is an array of 128 bytes.
  */
 struct ulp_rte_act_prop {
 	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
@@ -39,6 +51,35 @@ struct bnxt_ulp_device_params {
 	uint32_t			num_resources_per_flow;
 };
 
+struct bnxt_ulp_mapper_class_tbl_info {
+	enum bnxt_ulp_resource_func	resource_func;
+	uint32_t	table_type;
+	uint8_t		direction;
+	uint8_t		mem;
+	uint32_t	priority;
+	uint8_t		srch_b4_alloc;
+	uint32_t	critical_resource;
+
+	/* Information for accessing the ulp_key_field_list */
+	uint32_t	key_start_idx;
+	uint16_t	key_bit_size;
+	uint16_t	key_num_fields;
+	/* Size of the blob that holds the key */
+	uint16_t	blob_key_bit_size;
+
+	/* Information for accessing the ulp_class_result_field_list */
+	uint32_t	result_start_idx;
+	uint16_t	result_bit_size;
+	uint16_t	result_num_fields;
+
+	/* Information for accessing the ulp_ident_list */
+	uint32_t	ident_start_idx;
+	uint16_t	ident_nums;
+
+	uint8_t		mark_enable;
+	enum bnxt_ulp_regfile_index	regfile_wr_idx;
+};
+
 struct bnxt_ulp_mapper_act_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	enum tf_tbl_type table_type;
@@ -52,6 +93,15 @@ struct bnxt_ulp_mapper_act_tbl_info {
 	enum bnxt_ulp_regfile_index	regfile_wr_idx;
 };
 
+struct bnxt_ulp_mapper_class_key_field_info {
+	uint8_t			name[64];
+	enum bnxt_ulp_mask_opc	mask_opcode;
+	enum bnxt_ulp_spec_opc	spec_opcode;
+	uint16_t		field_bit_size;
+	uint8_t			mask_operand[16];
+	uint8_t			spec_operand[16];
+};
+
 struct bnxt_ulp_mapper_result_field_info {
 	uint8_t				name[64];
 	enum bnxt_ulp_result_opc	result_opcode;
@@ -59,14 +109,36 @@ struct bnxt_ulp_mapper_result_field_info {
 	uint8_t				result_operand[16];
 };
 
+struct bnxt_ulp_mapper_ident_info {
+	uint8_t		name[64];
+	uint32_t	resource_func;
+
+	uint16_t	ident_type;
+	uint16_t	ident_bit_size;
+	uint16_t	ident_bit_pos;
+	enum bnxt_ulp_regfile_index	regfile_wr_idx;
+};
+
+/*
+ * Flow Mapper Static Data Externs:
+ * Access to the below static data should be done through access functions
+ * rather than directly throughout the code.
+ */
+
 /*
- * The ulp_device_params is indexed by the dev_id
- * This table maintains the device specific parameters
+ * The ulp_device_params is indexed by the dev_id.
+ * This table maintains the device specific parameters.
  */
 extern struct bnxt_ulp_device_params ulp_device_params[];
 
 /*
  * The ulp_data_field_list provides the instructions for creating an action
+ * records such as tcam/em results.
+ */
+extern struct bnxt_ulp_mapper_result_field_info	ulp_class_result_field_list[];
+
+/*
+ * The ulp_data_field_list provides the instructions for creating an action
  * record.  It uses the same structure as the result list, but is only used for
  * actions.
  */
@@ -75,6 +147,19 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[];
 
 /*
  * The ulp_act_prop_map_table provides the mapping to index and size of action
+ * tcam and em tables.
+ */
+extern
+struct bnxt_ulp_mapper_class_key_field_info	ulp_class_key_field_list[];
+
+/*
+ * The ulp_ident_list provides the instructions for creating identifiers such
+ * as profile ids.
+ */
+extern struct bnxt_ulp_mapper_ident_info	ulp_ident_list[];
+
+/*
+ * The ulp_act_prop_map_table provides the mapping to index and size of action
  * properties.
  */
 extern uint32_t ulp_act_prop_map_table[];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 21/34] net/bnxt: add support to free key and action tables
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (19 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 20/34] net/bnxt: add support to process key tables Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 22/34] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
                       ` (13 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch does the following (a condensed sketch follows the list):
1. Gets all the flow resources from the flow id
2. Frees all the table resources
3. Frees the flow in the flow table
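
A condensed sketch of the teardown sequence this patch adds (not a
verbatim excerpt: error logging is trimmed, see
ulp_mapper_resources_free() in the diff below for the full version):

struct ulp_flow_db_res_params res = { 0 };
int32_t rc;

/* The first del must request the critical resource */
res.critical_resource = 1;
rc = ulp_flow_db_resource_del(ulp_ctx, tbl_type, fid, &res);
while (!rc) {
	/* Best-effort free; keep walking the list even on failure */
	ulp_mapper_resource_free(ulp_ctx, &res);
	/* Subsequent dels walk the chained resources */
	res.critical_resource = 0;
	rc = ulp_flow_db_resource_del(ulp_ctx, tbl_type, fid, &res);
}
/* All resources removed; release the flow id itself */
ulp_flow_db_fid_free(ulp_ctx, tbl_type, fid);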

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c  | 199 ++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h  |  30 +++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c   | 193 ++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h   |  13 +++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  23 +++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |  18 +++
 6 files changed, 474 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 6e73f25..eecee6b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -23,6 +23,32 @@
 #define ULP_FLOW_DB_RES_NXT_RESET(dst)	((dst) &= ~(ULP_FLOW_DB_RES_NXT_MASK))
 
 /*
+ * Helper function to set the bit in the active flow table
+ * No validation is done in this function.
+ *
+ * flow_tbl [in] Ptr to flow table
+ * idx [in] The index of the bit to be set or reset.
+ * flag [in] 1 to set and 0 to reset.
+ *
+ * returns none
+ */
+static void
+ulp_flow_db_active_flow_set(struct bnxt_ulp_flow_tbl	*flow_tbl,
+			    uint32_t			idx,
+			    uint32_t			flag)
+{
+	uint32_t		active_index;
+
+	active_index = idx / ULP_INDEX_BITMAP_SIZE;
+	if (flag)
+		ULP_INDEX_BITMAP_SET(flow_tbl->active_flow_tbl[active_index],
+				     idx);
+	else
+		ULP_INDEX_BITMAP_RESET(flow_tbl->active_flow_tbl[active_index],
+				       idx);
+}
+
+/*
  * Helper function to check whether the active flow bit
  *  is set. No validation being done in this function.
  *
@@ -71,6 +97,35 @@ ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info   *resource_info,
 }
 
 /*
+ * Helper function to copy the resource info to the resource params.
+ *  No validation being done in this function.
+ *
+ * resource_info [in] Ptr to resource information
+ * params [out] The output params to the caller
+ *
+ * returns none
+ */
+static void
+ulp_flow_db_res_info_to_params(struct ulp_fdb_resource_info   *resource_info,
+			       struct ulp_flow_db_res_params  *params)
+{
+	memset(params, 0, sizeof(struct ulp_flow_db_res_params));
+	params->direction = ((resource_info->nxt_resource_idx &
+				 ULP_FLOW_DB_RES_DIR_MASK) >>
+				 ULP_FLOW_DB_RES_DIR_BIT);
+	params->resource_func = ((resource_info->nxt_resource_idx &
+				 ULP_FLOW_DB_RES_FUNC_MASK) >>
+				 ULP_FLOW_DB_RES_FUNC_BITS);
+
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+		params->resource_hndl = resource_info->resource_hndl;
+		params->resource_type = resource_info->resource_type;
+	} else {
+		params->resource_hndl = resource_info->resource_em_handle;
+	}
+}
+
+/*
  * Helper function to allocate the flow table and initialize
  * the stack for allocation operations.
  *
@@ -122,7 +177,7 @@ ulp_flow_db_alloc_resource(struct bnxt_ulp_flow_db *flow_db,
 }
 
 /*
- * Helper function to de allocate the flow table.
+ * Helper function to deallocate the flow table.
  *
  * flow_db [in] Ptr to flow database structure
  * tbl_idx [in] The index to table creation.
@@ -321,3 +376,145 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 	/* all good, return success */
 	return 0;
 }
+
+/*
+ * Free the flow database entry.
+ * The params->critical_resource has to be set to 1 to free the first resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in/out] The contents to be copied into params.
+ * Only the critical_resource needs to be set by the caller.
+ *
+ * Returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+	struct ulp_fdb_resource_info	*nxt_resource, *fid_resource;
+	uint32_t			nxt_idx = 0;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (tbl_idx >= BNXT_ULP_FLOW_TABLE_MAX) {
+		BNXT_TF_DBG(ERR, "Invalid table index\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* check for max flows */
+	if (fid >= flow_tbl->num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return -EINVAL;
+	}
+
+	/* check if the flow is active or not */
+	if (!ulp_flow_db_active_flow_is_set(flow_tbl, fid)) {
+		BNXT_TF_DBG(ERR, "flow does not exist\n");
+		return -EINVAL;
+	}
+
+	fid_resource = &flow_tbl->flow_resources[fid];
+	if (!params->critical_resource) {
+		/* Not the critical resource so free the resource */
+		ULP_FLOW_DB_RES_NXT_SET(nxt_idx,
+					fid_resource->nxt_resource_idx);
+		if (!nxt_idx) {
+			/* reached end of resources */
+			return -ENOENT;
+		}
+		nxt_resource = &flow_tbl->flow_resources[nxt_idx];
+
+		/* connect the fid resource to the next resource */
+		ULP_FLOW_DB_RES_NXT_RESET(fid_resource->nxt_resource_idx);
+		ULP_FLOW_DB_RES_NXT_SET(fid_resource->nxt_resource_idx,
+					nxt_resource->nxt_resource_idx);
+
+		/* update the contents to be given to caller */
+		ulp_flow_db_res_info_to_params(nxt_resource, params);
+
+		/* Delete the nxt_resource */
+		memset(nxt_resource, 0, sizeof(struct ulp_fdb_resource_info));
+
+		/* add it to the free list */
+		flow_tbl->tail_index++;
+		if (flow_tbl->tail_index >= flow_tbl->num_resources) {
+			BNXT_TF_DBG(ERR, "FlowDB:Tail reached max\n");
+			return -ENOENT;
+		}
+		flow_tbl->flow_tbl_stack[flow_tbl->tail_index] = nxt_idx;
+
+	} else {
+		/* Critical resource. copy the contents and exit */
+		ulp_flow_db_res_info_to_params(fid_resource, params);
+		ULP_FLOW_DB_RES_NXT_SET(nxt_idx,
+					fid_resource->nxt_resource_idx);
+		memset(fid_resource, 0, sizeof(struct ulp_fdb_resource_info));
+		ULP_FLOW_DB_RES_NXT_SET(fid_resource->nxt_resource_idx,
+					nxt_idx);
+	}
+
+	/* all good, return success */
+	return 0;
+}
+
+/*
+ * Free the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
+			     enum bnxt_ulp_flow_db_tables	tbl_idx,
+			     uint32_t				fid)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (tbl_idx >= BNXT_ULP_FLOW_TABLE_MAX) {
+		BNXT_TF_DBG(ERR, "Invalid table index\n");
+		return -EINVAL;
+	}
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* check for limits of fid */
+	if (fid >= flow_tbl->num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return -EINVAL;
+	}
+
+	/* check if the flow is active or not */
+	if (!ulp_flow_db_active_flow_is_set(flow_tbl, fid)) {
+		BNXT_TF_DBG(ERR, "flow does not exist\n");
+		return -EINVAL;
+	}
+	flow_tbl->head_index--;
+	if (!flow_tbl->head_index) {
+		BNXT_TF_DBG(ERR, "FlowDB: Head Ptr is zero\n");
+		return -ENOENT;
+	}
+	flow_tbl->flow_tbl_stack[flow_tbl->head_index] = fid;
+	ulp_flow_db_active_flow_set(flow_tbl, fid, 0);
+
+	/* all good, return success */
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index f6055a5..20109b9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -99,4 +99,34 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 				 uint32_t			fid,
 				 struct ulp_flow_db_res_params	*params);
 
+/*
+ * Free the flow database entry.
+ * The params->critical_resource has to be set to 1 to free the first resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in/out] The contents to be copied into params.
+ * Only the critical_resource needs to be set by the caller.
+ *
+ * Returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params);
+
+/*
+ * Free the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
+			     enum bnxt_ulp_flow_db_tables	tbl_idx,
+			     uint32_t				fid);
+
 #endif /* _ULP_FLOW_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index a041394..1b22720 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -143,6 +143,87 @@ ulp_mapper_ident_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
 	return &ulp_ident_list[idx];
 }
 
+static inline int32_t
+ulp_mapper_tcam_entry_free(struct bnxt_ulp_context *ulp  __rte_unused,
+			   struct tf *tfp,
+			   struct ulp_flow_db_res_params *res)
+{
+	struct tf_free_tcam_entry_parms fparms = {
+		.dir		= res->direction,
+		.tcam_tbl_type	= res->resource_type,
+		.idx		= (uint16_t)res->resource_hndl
+	};
+
+	return tf_free_tcam_entry(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_index_entry_free(struct bnxt_ulp_context *ulp  __rte_unused,
+			    struct tf *tfp,
+			    struct ulp_flow_db_res_params *res)
+{
+	struct tf_free_tbl_entry_parms fparms = {
+		.dir	= res->direction,
+		.type	= res->resource_type,
+		.idx	= (uint32_t)res->resource_hndl
+	};
+
+	return tf_free_tbl_entry(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_eem_entry_free(struct bnxt_ulp_context *ulp,
+			  struct tf *tfp,
+			  struct ulp_flow_db_res_params *res)
+{
+	struct tf_delete_em_entry_parms fparms = { 0 };
+	int32_t rc;
+
+	fparms.dir		= res->direction;
+	fparms.mem		= TF_MEM_EXTERNAL;
+	fparms.flow_handle	= res->resource_hndl;
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp, &fparms.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_em_entry(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_ident_free(struct bnxt_ulp_context *ulp __rte_unused,
+		      struct tf *tfp,
+		      struct ulp_flow_db_res_params *res)
+{
+	struct tf_free_identifier_parms fparms = {
+		.dir		= res->direction,
+		.ident_type	= res->resource_type,
+		.id		= (uint16_t)res->resource_hndl
+	};
+
+	return tf_free_identifier(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_mark_free(struct bnxt_ulp_context *ulp,
+		     struct ulp_flow_db_res_params *res)
+{
+	uint32_t flag;
+	uint32_t fid;
+	uint32_t gfid;
+
+	fid	  = (uint32_t)res->resource_hndl;
+	TF_GET_FLAG_FROM_FLOW_ID(fid, flag);
+	TF_GET_GFID_FROM_FLOW_ID(fid, gfid);
+
+	return ulp_mark_db_mark_del(ulp,
+				    (flag == TF_GFID_TABLE_EXTERNAL),
+				    gfid,
+				    0);
+}
+
 static int32_t
 ulp_mapper_ident_process(struct bnxt_ulp_mapper_parms *parms,
 			 struct bnxt_ulp_mapper_class_tbl_info *tbl,
@@ -1131,3 +1212,115 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	return rc;
 }
+
+static int32_t
+ulp_mapper_resource_free(struct bnxt_ulp_context *ulp,
+			 struct ulp_flow_db_res_params *res)
+{
+	struct tf *tfp;
+	int32_t	rc = 0;
+
+	if (!res || !ulp) {
+		BNXT_TF_DBG(ERR, "Unable to free resource\n ");
+		return -EINVAL;
+	}
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Unable to free resource failed to get tfp\n");
+		return -EINVAL;
+	}
+
+	switch (res->resource_func) {
+	case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
+		rc = ulp_mapper_tcam_entry_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+		rc = ulp_mapper_eem_entry_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+		rc = ulp_mapper_index_entry_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_IDENTIFIER:
+		rc = ulp_mapper_ident_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_HW_FID:
+		rc = ulp_mapper_mark_free(ulp, res);
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+int32_t
+ulp_mapper_resources_free(struct bnxt_ulp_context	*ulp_ctx,
+			  uint32_t fid,
+			  enum bnxt_ulp_flow_db_tables	tbl_type)
+{
+	struct ulp_flow_db_res_params	res_parms = { 0 };
+	int32_t				rc, trc;
+
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Set the critical resource on the first resource del, then iterate
+	 * while status is good
+	 */
+	res_parms.critical_resource = 1;
+	rc = ulp_flow_db_resource_del(ulp_ctx, tbl_type, fid, &res_parms);
+
+	if (rc) {
+		/*
+		 * This is unexpected on the first call to resource del.
+		 * It likely means that the flow did not exist in the flow db.
+		 */
+		BNXT_TF_DBG(ERR, "Flow[%d][0x%08x] failed to free (rc=%d)\n",
+			    tbl_type, fid, rc);
+		return rc;
+	}
+
+	while (!rc) {
+		trc = ulp_mapper_resource_free(ulp_ctx, &res_parms);
+		if (trc)
+			/*
+			 * On fail, we still need to attempt to free the
+			 * remaining resources.  Don't return
+			 */
+			BNXT_TF_DBG(ERR,
+				    "Flow[%d][0x%x] Res[%d][0x%016" PRIx64
+				    "] failed rc=%d.\n",
+				    tbl_type, fid, res_parms.resource_func,
+				    res_parms.resource_hndl, trc);
+
+		/* All subsequent calls require the critical_resource to be zero */
+		res_parms.critical_resource = 0;
+
+		rc = ulp_flow_db_resource_del(ulp_ctx,
+					      tbl_type,
+					      fid,
+					      &res_parms);
+	}
+
+	/* Free the Flow ID since we've removed all resources */
+	rc = ulp_flow_db_fid_free(ulp_ctx, tbl_type, fid);
+
+	return rc;
+}
+
+int32_t
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
+{
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
+		return -EINVAL;
+	}
+
+	return ulp_mapper_resources_free(ulp_ctx,
+					 fid,
+					 BNXT_ULP_REGULAR_FLOW_TABLE);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 2221e12..8655728 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -38,4 +38,17 @@ struct bnxt_ulp_mapper_parms {
 	enum bnxt_ulp_flow_db_tables		tbl_idx;
 };
 
+/* Function that frees all resources associated with the flow. */
+int32_t
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
+
+/*
+ * Function that frees all resources and can be called on default or regular
+ * flows
+ */
+int32_t
+ulp_mapper_resources_free(struct bnxt_ulp_context	*ulp_ctx,
+			  uint32_t fid,
+			  enum bnxt_ulp_flow_db_tables	tbl_type);
+
 #endif /* _ULP_MAPPER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 837064e..566668e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -135,7 +135,7 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 		    mark_tbl->gfid_max,
 		    mark_tbl->gfid_mask);
 
-	/* Add the mart tbl to the ulp context. */
+	/* Add the mark tbl to the ulp context. */
 	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
 
 	return 0;
@@ -195,3 +195,24 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 {
 	return ulp_mark_db_mark_set(ctxt, is_gfid, gfid, mark);
 }
+
+/*
+ * Removes a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * gfid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] The mark to be removed (unused)
+ *
+ */
+int32_t
+ulp_mark_db_mark_del(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t gfid,
+		     uint32_t mark  __rte_unused)
+{
+	return ulp_mark_db_mark_set(ctxt, is_gfid, gfid, ULP_MARK_INVALID);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index 18abea4..f0d1515 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -72,4 +72,22 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 		     uint32_t gfid,
 		     uint32_t mark);
 
+/*
+ * Removes a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * gfid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] The mark to be removed (unused)
+ *
+ */
+int32_t
+ulp_mark_db_mark_del(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t gfid,
+		     uint32_t mark);
+
 #endif /* _ULP_MARK_MGR_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 22/34] net/bnxt: add support to alloc and program key and act tbls
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (20 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 21/34] net/bnxt: add support to free key and action tables Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 23/34] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
                       ` (12 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch does the following (a condensed sketch follows the list):
1. Gets the action tables information from the action template id
2. Gets the class tables information from the class template id
3. Initializes the registry file
4. Allocates a flow id from the flow table
5. Processes the class and action tables
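
The create path added in ulp_mapper_flow_create() boils down to the
following (condensed sketch, error handling trimmed; see the diff below
for the full version):

parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id, act_tid,
					     &parms.num_atbls);
parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id, class_tid,
					    &parms.num_ctbls);
ulp_regfile_init(parms.regfile);
ulp_flow_db_fid_alloc(ulp_ctx, BNXT_ULP_REGULAR_FLOW_TABLE, &parms.fid);
rc = ulp_mapper_action_tbls_process(&parms);
if (!rc)
	rc = ulp_mapper_class_tbls_process(&parms);
if (rc)
	/* unwind every resource already attached to the fid */
	ulp_mapper_flow_destroy(ulp_ctx, parms.fid);
else
	*flow_id = parms.fid;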

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |  37 +++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |  13 ++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 196 ++++++++++++++++++++++++--
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |  15 ++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     |  90 ++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |   7 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  31 +++-
 7 files changed, 378 insertions(+), 11 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index eecee6b..ee703a1 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -303,6 +303,43 @@ int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 }
 
 /*
+ * Allocate the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t ulp_flow_db_fid_alloc(struct bnxt_ulp_context		*ulp_ctxt,
+			      enum bnxt_ulp_flow_db_tables	tbl_idx,
+			      uint32_t				*fid)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	*fid = 0; /* Initialize fid to invalid value */
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+	/* check for max flows */
+	if (flow_tbl->num_flows <= flow_tbl->head_index) {
+		BNXT_TF_DBG(ERR, "Flow database has reached max flows\n");
+		return -ENOMEM;
+	}
+	*fid = flow_tbl->flow_tbl_stack[flow_tbl->head_index];
+	flow_tbl->head_index++;
+	ulp_flow_db_active_flow_set(flow_tbl, *fid, 1);
+
+	/* all good, return success */
+	return 0;
+}
+
+/*
  * Allocate the flow database entry.
  * The params->critical_resource has to be set to 0 to allocate a new resource.
  *
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index 20109b9..eb5effa 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -84,6 +84,19 @@ int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt);
 int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
 
 /*
+ * Allocate the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t ulp_flow_db_fid_alloc(struct bnxt_ulp_context		*ulp_ctxt,
+			      enum bnxt_ulp_flow_db_tables	tbl_idx,
+			      uint32_t				*fid);
+
+/*
  * Allocate the flow database entry.
  * The params->critical_resource has to be set to 0 to allocate a new resource.
  *
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 1b22720..f697f2f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -16,12 +16,6 @@
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
 
-int32_t
-ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms);
-
-int32_t
-ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms);
-
 /*
  * Get the size of the action property for a given index.
  *
@@ -38,7 +32,76 @@ ulp_mapper_act_prop_size_get(uint32_t idx)
 }
 
 /*
- * Get the list of result fields that implement the flow action
+ * Get the list of action tables that implement the flow action.
+ * Gets a device dependent list of tables that implement the action template id.
+ *
+ * dev_id [in] The device id of the forwarding element
+ *
+ * tid [in] The action template id that matches the flow
+ *
+ * num_tbls [out] The number of action tables in the returned array
+ *
+ * Returns an array of action tables that implement the flow, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_act_tbl_info *
+ulp_mapper_action_tbl_list_get(uint32_t dev_id,
+			       uint32_t tid,
+			       uint32_t *num_tbls)
+{
+	uint32_t	idx;
+	uint32_t	tidx;
+
+	if (!num_tbls) {
+		BNXT_TF_DBG(ERR, "Invalid arguments\n");
+		return NULL;
+	}
+
+	/* template shift and device mask */
+	tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
+
+	/* NOTE: Need to have something from template compiler to help validate
+	 * range of dev_id and act_tid
+	 */
+	idx		= ulp_act_tmpl_list[tidx].start_tbl_idx;
+	*num_tbls	= ulp_act_tmpl_list[tidx].num_tbls;
+
+	return &ulp_act_tbl_list[idx];
+}
+
+/*
+ * Get a list of classifier tables that implement the flow.
+ * Gets a device dependent list of tables that implement the class template id.
+ *
+ * dev_id [in] The device id of the forwarding element
+ *
+ * tid [in] The template id that matches the flow
+ *
+ * num_tbls [out] The number of classifier tables in the returned array
+ *
+ * returns an array of classifier tables that implement the flow, or NULL on
+ * error.
+ */
+static struct bnxt_ulp_mapper_class_tbl_info *
+ulp_mapper_class_tbl_list_get(uint32_t dev_id,
+			      uint32_t tid,
+			      uint32_t *num_tbls)
+{
+	uint32_t idx;
+	uint32_t tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
+
+	if (!num_tbls)
+		return NULL;
+
+	/* NOTE: Need to have something from template compiler to help validate
+	 * range of dev_id and tid
+	 */
+	idx		= ulp_class_tmpl_list[tidx].start_tbl_idx;
+	*num_tbls	= ulp_class_tmpl_list[tidx].num_tbls;
+
+	return &ulp_class_tbl_list[idx];
+}
+
+/*
+ * Get the list of key fields that implement the flow.
  *
  * ctxt [in] The ulp context
  *
@@ -46,7 +109,7 @@ ulp_mapper_act_prop_size_get(uint32_t idx)
  *
  * num_flds [out] The number of key fields in the returned array
  *
- * returns array of Key fields, or NULL on error
+ * Returns an array of key fields, or NULL on error.
  */
 static struct bnxt_ulp_mapper_class_key_field_info *
 ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
@@ -1147,7 +1210,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
  */
-int32_t
+static int32_t
 ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 {
 	uint32_t	i;
@@ -1169,7 +1232,7 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 }
 
 /* Create the classifier table entries for a flow. */
-int32_t
+static int32_t
 ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 {
 	uint32_t	i;
@@ -1324,3 +1387,116 @@ ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
 					 fid,
 					 BNXT_ULP_REGULAR_FLOW_TABLE);
 }
+
+/* Function to handle the mapping of the Flow to be compatible
+ * with the underlying hardware.
+ */
+int32_t
+ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
+		       uint32_t app_priority __rte_unused,
+		       struct ulp_rte_hdr_bitmap *hdr_bitmap __rte_unused,
+		       struct ulp_rte_hdr_field *hdr_field,
+		       struct ulp_rte_act_bitmap *act_bitmap,
+		       struct ulp_rte_act_prop *act_prop,
+		       uint32_t class_tid,
+		       uint32_t act_tid,
+		       uint32_t *flow_id)
+{
+	struct ulp_regfile		regfile;
+	struct bnxt_ulp_mapper_parms	parms;
+	struct bnxt_ulp_device_params	*device_params;
+	int32_t				rc, trc;
+
+	/* Initialize the parms structure */
+	memset(&parms, 0, sizeof(parms));
+	parms.act_prop = act_prop;
+	parms.act_bitmap = act_bitmap;
+	parms.regfile = &regfile;
+	parms.hdr_field = hdr_field;
+	parms.tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	parms.ulp_ctx = ulp_ctx;
+
+	/* Get the device id from the ulp context */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &parms.dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	/* Get the action table entry from device id and act context id */
+	parms.act_tid = act_tid;
+	parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
+						     parms.act_tid,
+						     &parms.num_atbls);
+	if (!parms.atbls || !parms.num_atbls) {
+		BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
+			    parms.dev_id, parms.act_tid);
+		return -EINVAL;
+	}
+
+	/* Get the class table entry from device id and class context id */
+	parms.class_tid = class_tid;
+	parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
+						    parms.class_tid,
+						    &parms.num_ctbls);
+	if (!parms.ctbls || !parms.num_ctbls) {
+		BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+			    parms.dev_id, parms.class_tid);
+		return -EINVAL;
+	}
+
+	/* Get the byte order for the further processing from device params */
+	device_params = bnxt_ulp_device_params_get(parms.dev_id);
+	if (!device_params) {
+		BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+			    parms.dev_id, parms.class_tid);
+		return -EINVAL;
+	}
+	parms.order = device_params->byte_order;
+	parms.encap_byte_swap = device_params->encap_byte_swap;
+
+	/* initialize the registry file for further processing */
+	if (!ulp_regfile_init(parms.regfile)) {
+		BNXT_TF_DBG(ERR, "regfile initialization failed.\n");
+		return -EINVAL;
+	}
+
+	/* Allocate a Flow ID to attach all of the flow's resources to.
+	 * Once allocated, any error must walk the list of resources and
+	 * free each of them.
+	 */
+	rc = ulp_flow_db_fid_alloc(ulp_ctx,
+				   BNXT_ULP_REGULAR_FLOW_TABLE,
+				   &parms.fid);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to allocate flow table entry\n");
+		return rc;
+	}
+
+	/* Process the action template list from the selected action table */
+	rc = ulp_mapper_action_tbls_process(&parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "action tables failed creation for %d:%d\n",
+			    parms.dev_id, parms.act_tid);
+		goto flow_error;
+	}
+
+	/* All good. Now process the class template */
+	rc = ulp_mapper_class_tbls_process(&parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "class tables failed creation for %d:%d\n",
+			    parms.dev_id, parms.class_tid);
+		goto flow_error;
+	}
+
+	*flow_id = parms.fid;
+
+	return rc;
+
+flow_error:
+	/* Free all resources that were allocated during flow creation */
+	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free all resources rc=%d\n", trc);
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 8655728..5f3d46e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -38,6 +38,21 @@ struct bnxt_ulp_mapper_parms {
 	enum bnxt_ulp_flow_db_tables		tbl_idx;
 };
 
+/*
+ * Function to handle the mapping of the Flow to be compatible
+ * with the underlying hardware.
+ */
+int32_t
+ulp_mapper_flow_create(struct bnxt_ulp_context	*ulp_ctx,
+		       uint32_t		app_priority,
+		       struct ulp_rte_hdr_bitmap  *hdr_bitmap,
+		       struct ulp_rte_hdr_field *hdr_field,
+		       struct ulp_rte_act_bitmap *act,
+		       struct ulp_rte_act_prop *act_prop,
+		       uint32_t		class_tid,
+		       uint32_t		act_tid,
+		       uint32_t		*flow_id);
+
 /* Function that frees all resources associated with the flow. */
 int32_t
 ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index aefece8..9d52937 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -110,6 +110,74 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 	}
 };
 
+struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
+	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 0
+	}
+};
+
+struct bnxt_ulp_mapper_class_tbl_info ulp_class_tbl_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.table_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 0,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 0,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.ident_start_idx = 0,
+	.ident_nums = 1,
+	.mark_enable = BNXT_ULP_MARK_ENABLE_NO,
+	.critical_resource = 0,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.table_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 13,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 13,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.ident_start_idx = 1,
+	.ident_nums = 1,
+	.mark_enable = BNXT_ULP_MARK_ENABLE_NO,
+	.critical_resource = 0,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.table_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 55,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 197,
+	.key_num_fields = 11,
+	.result_start_idx = 21,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.ident_start_idx = 2,
+	.ident_nums = 0,
+	.mark_enable = BNXT_ULP_MARK_ENABLE_YES,
+	.critical_resource = 1,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	}
+};
+
 struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
 	{
 	.field_bit_size = 12,
@@ -938,6 +1006,28 @@ struct bnxt_ulp_mapper_ident_info ulp_ident_list[] = {
 	}
 };
 
+struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
+	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 1,
+	.start_tbl_idx = 0
+	}
+};
+
+struct bnxt_ulp_mapper_act_tbl_info ulp_act_tbl_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.table_type = TF_TBL_TYPE_EXT,
+	.direction = TF_DIR_RX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 0,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN
+	}
+};
+
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
 	.field_bit_size = 14,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 733836a..957b21a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -12,6 +12,7 @@
 #define ULP_TEMPLATE_DB_H_
 
 #define BNXT_ULP_MAX_NUM_DEVICES 4
+#define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -127,6 +128,12 @@ enum bnxt_ulp_result_opc {
 	BNXT_ULP_RESULT_OPC_LAST = 4
 };
 
+enum bnxt_ulp_search_before_alloc {
+	BNXT_ULP_SEARCH_BEFORE_ALLOC_NO = 0,
+	BNXT_ULP_SEARCH_BEFORE_ALLOC_YES = 1,
+	BNXT_ULP_SEARCH_BEFORE_ALLOC_LAST = 2
+};
+
 enum bnxt_ulp_spec_opc {
 	BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index e28d049..b7094c5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,6 +17,10 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
+struct ulp_rte_hdr_bitmap {
+	uint64_t	bits;
+};
+
 /* Structure to store the protocol fields */
 #define RTE_PARSER_FLOW_HDR_FIELD_SIZE		16
 struct ulp_rte_hdr_field {
@@ -51,6 +55,13 @@ struct bnxt_ulp_device_params {
 	uint32_t			num_resources_per_flow;
 };
 
+/* Flow Mapper */
+struct bnxt_ulp_mapper_tbl_list_info {
+	uint32_t	device_name;
+	uint32_t	start_tbl_idx;
+	uint32_t	num_tbls;
+};
+
 struct bnxt_ulp_mapper_class_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	uint32_t	table_type;
@@ -132,7 +143,25 @@ struct bnxt_ulp_mapper_ident_info {
 extern struct bnxt_ulp_device_params ulp_device_params[];
 
 /*
- * The ulp_data_field_list provides the instructions for creating an action
+ * The ulp_class_tmpl_list and ulp_act_tmpl_list are indexed by the dev_id
+ * and template id (either class or action) returned by the matcher.
+ * The result provides the start index and number of entries in the connected
+ * ulp_class_tbl_list/ulp_act_tbl_list.
+ */
+extern struct bnxt_ulp_mapper_tbl_list_info	ulp_class_tmpl_list[];
+extern struct bnxt_ulp_mapper_tbl_list_info	ulp_act_tmpl_list[];
+
+/*
+ * The ulp_class_tbl_list and ulp_act_tbl_list are indexed based on the results
+ * of the template lists.  Each entry describes the high level details of the
+ * table entry to include the start index and number of instructions in the
+ * field lists.
+ */
+extern struct bnxt_ulp_mapper_class_tbl_info	ulp_class_tbl_list[];
+extern struct bnxt_ulp_mapper_act_tbl_info	ulp_act_tbl_list[];
+
+/*
+ * The ulp_class_result_field_list provides the instructions for creating result
  * records such as tcam/em results.
  */
 extern struct bnxt_ulp_mapper_result_field_info	ulp_class_result_field_list[];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 23/34] net/bnxt: match rte flow items with flow template patterns
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (21 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 22/34] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 24/34] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
                       ` (11 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following (a usage sketch follows the list):
1. Takes hdr_bitmap generated from the rte_flow_items
2. Iterates through the static hdr_bitmap list
3. Returns success if a match is found, otherwise an error
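
A caller (the rte_flow create/validate hooks added later in this
series) is expected to drive the matcher roughly as follows
(hypothetical usage sketch; the bitmaps are assumed to have been built
by the item and action parsers):

uint32_t class_id;
int32_t rc;

rc = ulp_matcher_pattern_match(ULP_DIR_INGRESS, &hdr_bitmap, hdr_field,
			       &act_bitmap, &class_id);
if (rc != BNXT_TF_RC_SUCCESS)
	return rc;	/* no class template supports this pattern */
/* class_id now selects the class template consumed by the mapper */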

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |  12 ++
 drivers/net/bnxt/tf_ulp/ulp_matcher.c         | 152 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_matcher.h         |  26 +++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 115 +++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |  40 +++++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  21 ++++
 7 files changed, 367 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index f464d9e..455fd5c 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -63,6 +63,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mapper.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_matcher.c
 
 #
 # Export include files
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index 3516df4..e4ebfc5 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -25,6 +25,18 @@
 #define	BNXT_ULP_TX_NUM_FLOWS			32
 #define	BNXT_ULP_TX_TBL_IF_ID			0
 
+enum bnxt_tf_rc {
+	BNXT_TF_RC_PARSE_ERR	= -2,
+	BNXT_TF_RC_ERROR	= -1,
+	BNXT_TF_RC_SUCCESS	= 0
+};
+
+/* ulp direction Type */
+enum ulp_direction_type {
+	ULP_DIR_INGRESS,
+	ULP_DIR_EGRESS,
+};
+
 struct bnxt_ulp_mark_tbl *
 bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.c b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
new file mode 100644
index 0000000..f367e4c
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "ulp_matcher.h"
+#include "ulp_utils.h"
+
+/* Utility function to check if bitmap is zero */
+static inline
+int ulp_field_mask_is_zero(uint8_t *bitmap, uint32_t size)
+{
+	while (size-- > 0) {
+		if (*bitmap != 0)
+			return 0;
+		bitmap++;
+	}
+	return 1;
+}
+
+/* Utility function to check if bitmap is all ones */
+static inline int
+ulp_field_mask_is_ones(uint8_t *bitmap, uint32_t size)
+{
+	while (size-- > 0) {
+		if (*bitmap != 0xFF)
+			return 0;
+		bitmap++;
+	}
+	return 1;
+}
+
+/* Utility function to check if bitmap is non-zero */
+static inline int
+ulp_field_mask_notzero(uint8_t *bitmap, uint32_t size)
+{
+	while (size-- > 0) {
+		if (*bitmap != 0)
+			return 1;
+		bitmap++;
+	}
+	return 0;
+}
+
+/* Utility function to mask the computed and internal proto headers. */
+static void
+ulp_matcher_hdr_fields_normalize(struct ulp_rte_hdr_bitmap *hdr1,
+				 struct ulp_rte_hdr_bitmap *hdr2)
+{
+	/* copy the contents first */
+	rte_memcpy(hdr2, hdr1, sizeof(struct ulp_rte_hdr_bitmap));
+
+	/* reset the computed fields */
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_SVIF);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_OO_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_OI_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_IO_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_II_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_O_L3);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_O_L4);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_I_L3);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_I_L4);
+}
+
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the pattern masks against the flow templates.
+ */
+int32_t
+ulp_matcher_pattern_match(enum ulp_direction_type   dir,
+			  struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			  struct ulp_rte_hdr_field  *hdr_field,
+			  struct ulp_rte_act_bitmap *act_bitmap,
+			  uint32_t		    *class_id)
+{
+	struct bnxt_ulp_header_match_info	*sel_hdr_match;
+	uint32_t				hdr_num, idx, jdx;
+	uint32_t				match = 0;
+	struct ulp_rte_hdr_bitmap		hdr_bitmap_masked;
+	uint32_t				start_idx;
+	struct ulp_rte_hdr_field		*m_field;
+	struct bnxt_ulp_matcher_field_info	*sf;
+
+	/* Select the ingress or egress template to match against */
+	if (dir == ULP_DIR_INGRESS) {
+		sel_hdr_match = ulp_ingress_hdr_match_list;
+		hdr_num = BNXT_ULP_INGRESS_HDR_MATCH_SZ;
+	} else {
+		sel_hdr_match = ulp_egress_hdr_match_list;
+		hdr_num = BNXT_ULP_EGRESS_HDR_MATCH_SZ;
+	}
+
+	/* Remove the hdr bit maps that are internal or computed */
+	ulp_matcher_hdr_fields_normalize(hdr_bitmap, &hdr_bitmap_masked);
+
+	/* Loop through the list of class templates to find the match */
+	for (idx = 0; idx < hdr_num; idx++, sel_hdr_match++) {
+		if (ULP_BITSET_CMP(&sel_hdr_match->hdr_bitmap,
+				   &hdr_bitmap_masked)) {
+			/* no match found */
+			BNXT_TF_DBG(DEBUG, "Pattern Match failed template=%d\n",
+				    idx);
+			continue;
+		}
+		match = ULP_BITMAP_ISSET(act_bitmap->bits,
+					 BNXT_ULP_ACTION_BIT_VNIC);
+		if (match != sel_hdr_match->act_vnic) {
+			/* no match found */
+			BNXT_TF_DBG(DEBUG, "Vnic Match failed template=%d\n",
+				    idx);
+			continue;
+		} else {
+			match = 1;
+		}
+
+		/* Found a matching hdr bitmap, match the fields next */
+		start_idx = sel_hdr_match->start_idx;
+		for (jdx = 0; jdx < sel_hdr_match->num_entries; jdx++) {
+			m_field = &hdr_field[jdx + BNXT_ULP_HDR_FIELD_LAST - 1];
+			sf = &ulp_field_match[start_idx + jdx];
+			switch (sf->mask_opcode) {
+			case BNXT_ULP_FMF_MASK_ANY:
+				match &= ulp_field_mask_is_zero(m_field->mask,
+								m_field->size);
+				break;
+			case BNXT_ULP_FMF_MASK_EXACT:
+				match &= ulp_field_mask_is_ones(m_field->mask,
+								m_field->size);
+				break;
+			case BNXT_ULP_FMF_MASK_WILDCARD:
+				match &= ulp_field_mask_notzero(m_field->mask,
+								m_field->size);
+				break;
+			case BNXT_ULP_FMF_MASK_IGNORE:
+			default:
+				break;
+			}
+			if (!match)
+				break;
+		}
+		if (match) {
+			BNXT_TF_DBG(DEBUG,
+				    "Found matching pattern template %d\n",
+				    sel_hdr_match->class_tmpl_id);
+			*class_id = sel_hdr_match->class_tmpl_id;
+			return BNXT_TF_RC_SUCCESS;
+		}
+	}
+	BNXT_TF_DBG(DEBUG, "Did not find any matching template\n");
+	*class_id = 0;
+	return BNXT_TF_RC_ERROR;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.h b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
new file mode 100644
index 0000000..57a161d
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef ULP_MATCHER_H_
+#define ULP_MATCHER_H_
+
+#include <rte_log.h>
+#include "bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_tf_common.h"
+
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the pattern masks against the flow templates.
+ */
+int32_t
+ulp_matcher_pattern_match(enum ulp_direction_type	    dir,
+			  struct ulp_rte_hdr_bitmap	   *hdr_bitmap,
+			  struct ulp_rte_hdr_field	   *hdr_field,
+			  struct ulp_rte_act_bitmap	   *act_bitmap,
+			  uint32_t			   *class_id);
+
+#endif /* ULP_MATCHER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 9d52937..9fc4b08 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -798,6 +798,121 @@ struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
 	}
 };
 
+struct bnxt_ulp_header_match_info ulp_ingress_hdr_match_list[] = {
+	{
+	.hdr_bitmap = { .bits =
+		BNXT_ULP_HDR_BIT_O_ETH |
+		BNXT_ULP_HDR_BIT_O_IPV4 |
+		BNXT_ULP_HDR_BIT_O_UDP },
+	.start_idx = 0,
+	.num_entries = 24,
+	.class_tmpl_id = 0,
+	.act_vnic = 0
+	}
+};
+
+struct bnxt_ulp_header_match_info ulp_egress_hdr_match_list[] = {
+};
+
+struct bnxt_ulp_matcher_field_info ulp_field_match[] = {
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_ANY,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_ANY,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	}
+};
+
 struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
 	{
 	.field_bit_size = 10,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 957b21a..319500a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -13,6 +13,8 @@
 
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
+#define BNXT_ULP_INGRESS_HDR_MATCH_SZ 2
+#define BNXT_ULP_EGRESS_HDR_MATCH_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -45,6 +47,31 @@ enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_LAST             = 0x0000000008000000
 };
 
+enum bnxt_ulp_hdr_bit {
+	BNXT_ULP_HDR_BIT_SVIF                = 0x0000000000000001,
+	BNXT_ULP_HDR_BIT_O_ETH               = 0x0000000000000002,
+	BNXT_ULP_HDR_BIT_OO_VLAN             = 0x0000000000000004,
+	BNXT_ULP_HDR_BIT_OI_VLAN             = 0x0000000000000008,
+	BNXT_ULP_HDR_BIT_O_L3                = 0x0000000000000010,
+	BNXT_ULP_HDR_BIT_O_IPV4              = 0x0000000000000020,
+	BNXT_ULP_HDR_BIT_O_IPV6              = 0x0000000000000040,
+	BNXT_ULP_HDR_BIT_O_L4                = 0x0000000000000080,
+	BNXT_ULP_HDR_BIT_O_TCP               = 0x0000000000000100,
+	BNXT_ULP_HDR_BIT_O_UDP               = 0x0000000000000200,
+	BNXT_ULP_HDR_BIT_T_VXLAN             = 0x0000000000000400,
+	BNXT_ULP_HDR_BIT_T_GRE               = 0x0000000000000800,
+	BNXT_ULP_HDR_BIT_I_ETH               = 0x0000000000001000,
+	BNXT_ULP_HDR_BIT_IO_VLAN             = 0x0000000000002000,
+	BNXT_ULP_HDR_BIT_II_VLAN             = 0x0000000000004000,
+	BNXT_ULP_HDR_BIT_I_L3                = 0x0000000000008000,
+	BNXT_ULP_HDR_BIT_I_IPV4              = 0x0000000000010000,
+	BNXT_ULP_HDR_BIT_I_IPV6              = 0x0000000000020000,
+	BNXT_ULP_HDR_BIT_I_L4                = 0x0000000000040000,
+	BNXT_ULP_HDR_BIT_I_TCP               = 0x0000000000080000,
+	BNXT_ULP_HDR_BIT_I_UDP               = 0x0000000000100000,
+	BNXT_ULP_HDR_BIT_LAST                = 0x0000000000200000
+};
+
 enum bnxt_ulp_byte_order {
 	BNXT_ULP_BYTE_ORDER_BE,
 	BNXT_ULP_BYTE_ORDER_LE,
@@ -67,12 +94,25 @@ enum bnxt_ulp_fmf_mask {
 	BNXT_ULP_FMF_MASK_LAST
 };
 
+enum bnxt_ulp_fmf_spec {
+	BNXT_ULP_FMF_SPEC_IGNORE = 0,
+	BNXT_ULP_FMF_SPEC_LAST = 1
+};
+
 enum bnxt_ulp_mark_enable {
 	BNXT_ULP_MARK_ENABLE_NO = 0,
 	BNXT_ULP_MARK_ENABLE_YES = 1,
 	BNXT_ULP_MARK_ENABLE_LAST = 2
 };
 
+enum bnxt_ulp_hdr_field {
+	BNXT_ULP_HDR_FIELD_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HDR_FIELD_O_VTAG_NUM = 1,
+	BNXT_ULP_HDR_FIELD_I_VTAG_NUM = 2,
+	BNXT_ULP_HDR_FIELD_SVIF_INDEX = 3,
+	BNXT_ULP_HDR_FIELD_LAST = 4
+};
+
 enum bnxt_ulp_mask_opc {
 	BNXT_ULP_MASK_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index b7094c5..dd06fb1 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -29,6 +29,11 @@ struct ulp_rte_hdr_field {
 	uint32_t	size;
 };
 
+struct bnxt_ulp_matcher_field_info {
+	enum bnxt_ulp_fmf_mask	mask_opcode;
+	enum bnxt_ulp_fmf_spec	spec_opcode;
+};
+
 struct ulp_rte_act_bitmap {
 	uint64_t	bits;
 };
@@ -41,6 +46,22 @@ struct ulp_rte_act_prop {
 	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
 };
 
+/* Flow Matcher structures */
+struct bnxt_ulp_header_match_info {
+	struct ulp_rte_hdr_bitmap		hdr_bitmap;
+	uint32_t				start_idx;
+	uint32_t				num_entries;
+	uint32_t				class_tmpl_id;
+	uint32_t				act_vnic;
+};
+
+/* Flow matcher template structure array defined in the template source */
+extern struct bnxt_ulp_header_match_info  ulp_ingress_hdr_match_list[];
+extern struct bnxt_ulp_header_match_info  ulp_egress_hdr_match_list[];
+
+/* Flow field match information structure array defined in the template source */
+extern struct bnxt_ulp_matcher_field_info	ulp_field_match[];
+
 /* Device specific parameters */
 struct bnxt_ulp_device_params {
 	uint8_t				description[16];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 24/34] net/bnxt: match rte flow actions with flow template actions
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (22 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 23/34] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 25/34] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
                       ` (10 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Takes the act_bitmap generated from the rte_flow_actions
2. Iterates through the static act_bitmap template list
3. Returns success if a match is found, otherwise an error (see the
   caller sketch below)
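
For illustration, here is a minimal caller sketch (hypothetical call
site; the actual driver hook that invokes the matcher lands later in
this series). It assumes only the enums and the prototype added by
this patch:

	struct ulp_rte_act_bitmap act_bitmap = { .bits =
		BNXT_ULP_ACTION_BIT_MARK | BNXT_ULP_ACTION_BIT_RSS };
	uint32_t act_tmpl_id;

	/* Matches the ingress MARK|RSS entry added by this patch */
	if (ulp_matcher_action_match(ULP_DIR_INGRESS, &act_bitmap,
				     &act_tmpl_id) != BNXT_TF_RC_SUCCESS)
		return BNXT_TF_RC_ERROR; /* no template for this action set */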

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_matcher.c         | 36 +++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_matcher.h         |  9 +++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 12 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |  2 ++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h | 10 ++++++++
 5 files changed, 69 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.c b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
index f367e4c..ec4121d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_matcher.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
@@ -150,3 +150,39 @@ ulp_matcher_pattern_match(enum ulp_direction_type   dir,
 	*class_id = 0;
 	return BNXT_TF_RC_ERROR;
 }
+
+/*
+ * Function to handle the matching of RTE flow actions against
+ * the flow action templates.
+ */
+int32_t
+ulp_matcher_action_match(enum ulp_direction_type		dir,
+			 struct ulp_rte_act_bitmap		*act_bitmap,
+			 uint32_t				*act_id)
+{
+	struct bnxt_ulp_action_match_info	*sel_act_match;
+	uint32_t				act_num, idx;
+
+	/* Select the ingress or egress template to match against */
+	if (dir == ULP_DIR_INGRESS) {
+		sel_act_match = ulp_ingress_act_match_list;
+		act_num = BNXT_ULP_INGRESS_ACT_MATCH_SZ;
+	} else {
+		sel_act_match = ulp_egress_act_match_list;
+		act_num = BNXT_ULP_EGRESS_ACT_MATCH_SZ;
+	}
+
+	/* Loop through the list of action templates to find the match */
+	for (idx = 0; idx < act_num; idx++, sel_act_match++) {
+		if (!ULP_BITSET_CMP(&sel_act_match->act_bitmap,
+				    act_bitmap)) {
+			*act_id = sel_act_match->act_tmpl_id;
+			BNXT_TF_DBG(DEBUG, "Found matching act template %u\n",
+				    *act_id);
+			return BNXT_TF_RC_SUCCESS;
+		}
+	}
+	BNXT_TF_DBG(DEBUG, "Did not find any matching action template\n");
+	*act_id = 0;
+	return BNXT_TF_RC_ERROR;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.h b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
index 57a161d..c818bbe 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_matcher.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
@@ -23,4 +23,13 @@ ulp_matcher_pattern_match(enum ulp_direction_type	    dir,
 			  struct ulp_rte_act_bitmap	   *act_bitmap,
 			  uint32_t			   *class_id);
 
+/*
+ * Function to handle the matching of RTE flow actions against
+ * the flow action templates.
+ */
+int32_t
+ulp_matcher_action_match(enum ulp_direction_type	dir,
+			 struct ulp_rte_act_bitmap	*act_bitmap,
+			 uint32_t			*act_id);
+
 #endif /* ULP_MATCHER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 9fc4b08..5a5b1f1 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -1121,6 +1121,18 @@ struct bnxt_ulp_mapper_ident_info ulp_ident_list[] = {
 	}
 };
 
+struct bnxt_ulp_action_match_info ulp_ingress_act_match_list[] = {
+	{
+	.act_bitmap = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_RSS },
+	.act_tmpl_id = 0
+	}
+};
+
+struct bnxt_ulp_action_match_info ulp_egress_act_match_list[] = {
+};
+
 struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
 	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 319500a..f4850bf 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -15,6 +15,8 @@
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 #define BNXT_ULP_INGRESS_HDR_MATCH_SZ 2
 #define BNXT_ULP_EGRESS_HDR_MATCH_SZ 1
+#define BNXT_ULP_INGRESS_ACT_MATCH_SZ 2
+#define BNXT_ULP_EGRESS_ACT_MATCH_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index dd06fb1..0e811ec 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -62,6 +62,16 @@ extern struct bnxt_ulp_header_match_info  ulp_egress_hdr_match_list[];
 /* Flow field match Information Structure Array defined in template source*/
 extern struct bnxt_ulp_matcher_field_info	ulp_field_match[];
 
+/* Flow Matcher Action structures */
+struct bnxt_ulp_action_match_info {
+	struct ulp_rte_act_bitmap		act_bitmap;
+	uint32_t				act_tmpl_id;
+};
+
+/* Flow Matcher templates Structure Array defined in template source */
+extern struct bnxt_ulp_action_match_info  ulp_ingress_act_match_list[];
+extern struct bnxt_ulp_action_match_info  ulp_egress_act_match_list[];
+
 /* Device specific parameters */
 struct bnxt_ulp_device_params {
 	uint8_t				description[16];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 25/34] net/bnxt: add support for rte flow item parsing
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (23 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 24/34] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 26/34] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
                       ` (9 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

1. Registers a callback handler for each supported rte_flow_item
   type
2. Iterates through each rte_flow_item until RTE_FLOW_ITEM_TYPE_END
3. Invokes the header callback handler
4. Each header callback handler populates the respective fields in
   hdr_field & hdr_bitmap (see the caller sketch below)
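
For illustration, here is a minimal caller sketch (hypothetical call
site; "pattern" stands for the item array handed to rte_flow_create()):

	struct ulp_rte_hdr_bitmap hdr_bitmap;
	struct ulp_rte_hdr_field hdr_field[BNXT_ULP_PROTO_HDR_MAX];

	memset(&hdr_bitmap, 0, sizeof(hdr_bitmap));
	memset(hdr_field, 0, sizeof(hdr_field));

	/* Walks the items; callbacks fill hdr_bitmap and hdr_field */
	if (bnxt_ulp_rte_parser_hdr_parse(pattern, &hdr_bitmap,
					  hdr_field) != BNXT_TF_RC_SUCCESS)
		return BNXT_TF_RC_ERROR; /* unsupported item in pattern */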

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   1 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      | 767 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h      | 120 ++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 196 +++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |   7 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  26 +
 6 files changed, 1117 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 455fd5c..5e2d751 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -64,6 +64,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mapper.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_matcher.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_rte_parser.c
 
 #
 # Export include files
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
new file mode 100644
index 0000000..3ffdcbd
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -0,0 +1,767 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_tf_common.h"
+#include "ulp_rte_parser.h"
+#include "ulp_utils.h"
+#include "tfp.h"
+
+/* Inline function to read an integer that is stored in big endian format */
+static inline void ulp_util_field_int_read(uint8_t *buffer,
+					   uint32_t *val)
+{
+	uint32_t temp_val;
+
+	memcpy(&temp_val, buffer, sizeof(uint32_t));
+	*val = rte_be_to_cpu_32(temp_val);
+}
+
+/* Inline function to write an integer that is stored in big endian format */
+static inline void ulp_util_field_int_write(uint8_t *buffer,
+					    uint32_t val)
+{
+	uint32_t temp_val = rte_cpu_to_be_32(val);
+
+	memcpy(buffer, &temp_val, sizeof(uint32_t));
+}
+
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow items into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
+			      struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			      struct ulp_rte_hdr_field *hdr_field)
+{
+	const struct rte_flow_item *item = pattern;
+	uint32_t field_idx = BNXT_ULP_HDR_FIELD_LAST;
+	uint32_t vlan_idx = 0;
+	struct bnxt_ulp_rte_hdr_info *hdr_info;
+
+	/* Parse all the items in the pattern */
+	while (item && item->type != RTE_FLOW_ITEM_TYPE_END) {
+		/* get the header information from the flow_hdr_info table */
+		hdr_info = &ulp_hdr_info[item->type];
+		if (hdr_info->hdr_type ==
+		    BNXT_ULP_HDR_TYPE_NOT_SUPPORTED) {
+			BNXT_TF_DBG(ERR,
+				    "Truflow parser does not support type %d\n",
+				    item->type);
+			return BNXT_TF_RC_PARSE_ERR;
+		} else if (hdr_info->hdr_type ==
+			   BNXT_ULP_HDR_TYPE_SUPPORTED) {
+			/* call the registered callback handler */
+			if (hdr_info->proto_hdr_func) {
+				if (hdr_info->proto_hdr_func(item,
+							     hdr_bitmap,
+							     hdr_field,
+							     &field_idx,
+							     &vlan_idx) !=
+				    BNXT_TF_RC_SUCCESS) {
+					return BNXT_TF_RC_ERROR;
+				}
+			}
+		}
+		item++;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to set the SVIF information in the header bitmap and fields. */
+static int32_t
+ulp_rte_parser_svif_set(struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			enum rte_flow_item_type proto,
+			uint32_t svif,
+			uint32_t mask)
+{
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_SVIF)) {
+		BNXT_TF_DBG(ERR,
+			    "SVIF already set,"
+			    " multiple sources not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* TBD: Check for any mapping errors for svif */
+	/* Update the hdr_bitmap with BNXT_ULP_HDR_BIT_SVIF. */
+	ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_SVIF);
+
+	if (proto != RTE_FLOW_ITEM_TYPE_PF) {
+		memcpy(hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].spec,
+		       &svif, sizeof(svif));
+		memcpy(hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].mask,
+		       &mask, sizeof(mask));
+		hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].size = sizeof(svif);
+	}
+
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item PF Header. */
+int32_t
+ulp_rte_pf_hdr_handler(const struct rte_flow_item *item,
+		       struct ulp_rte_hdr_bitmap *hdr_bitmap,
+		       struct ulp_rte_hdr_field *hdr_field,
+		       uint32_t *field_idx __rte_unused,
+		       uint32_t *vlan_idx __rte_unused)
+{
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, 0, 0);
+}
+
+/* Function to handle the parsing of RTE Flow item VF Header. */
+int32_t
+ulp_rte_vf_hdr_handler(const struct rte_flow_item *item,
+		       struct ulp_rte_hdr_bitmap *hdr_bitmap,
+		       struct ulp_rte_hdr_field	 *hdr_field,
+		       uint32_t *field_idx __rte_unused,
+		       uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_vf *vf_spec, *vf_mask;
+	uint32_t svif = 0, mask = 0;
+
+	vf_spec = item->spec;
+	vf_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for vf into hdr_field using vf
+	 * header fields.
+	 */
+	if (vf_spec)
+		svif = vf_spec->id;
+	if (vf_mask)
+		mask = vf_mask->id;
+
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, svif, mask);
+}
+
+/* Function to handle the parsing of RTE Flow item port id Header. */
+int32_t
+ulp_rte_port_id_hdr_handler(const struct rte_flow_item *item,
+			    struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			    struct ulp_rte_hdr_field *hdr_field,
+			    uint32_t *field_idx __rte_unused,
+			    uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_port_id *port_spec, *port_mask;
+	uint32_t svif = 0, mask = 0;
+
+	port_spec = item->spec;
+	port_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for Port into hdr_field using port id
+	 * header fields.
+	 */
+	if (port_spec)
+		svif = port_spec->id;
+	if (port_mask)
+		mask = port_mask->id;
+
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, svif, mask);
+}
+
+/* Function to handle the parsing of RTE Flow item phy port Header. */
+int32_t
+ulp_rte_phy_port_hdr_handler(const struct rte_flow_item *item,
+			     struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			     struct ulp_rte_hdr_field *hdr_field,
+			     uint32_t *field_idx __rte_unused,
+			     uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_phy_port *port_spec, *port_mask;
+	uint32_t svif = 0, mask = 0;
+
+	port_spec = item->spec;
+	port_mask = item->mask;
+
+	/* Copy the rte_flow_item for phy port into hdr_field */
+	if (port_spec)
+		svif = port_spec->index;
+	if (port_mask)
+		mask = port_mask->index;
+
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, svif, mask);
+}
+
+/* Function to handle the parsing of RTE Flow item Ethernet Header. */
+int32_t
+ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
+			struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			uint32_t *field_idx,
+			uint32_t *vlan_idx)
+{
+	const struct rte_flow_item_eth *eth_spec, *eth_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+	uint64_t set_flag = 0;
+
+	eth_spec = item->spec;
+	eth_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for eth into hdr_field using ethernet
+	 * header fields
+	 */
+	if (eth_spec) {
+		hdr_field[idx].size = sizeof(eth_spec->dst.addr_bytes);
+		memcpy(hdr_field[idx++].spec, eth_spec->dst.addr_bytes,
+		       sizeof(eth_spec->dst.addr_bytes));
+		hdr_field[idx].size = sizeof(eth_spec->src.addr_bytes);
+		memcpy(hdr_field[idx++].spec, eth_spec->src.addr_bytes,
+		       sizeof(eth_spec->src.addr_bytes));
+		hdr_field[idx].size = sizeof(eth_spec->type);
+		memcpy(hdr_field[idx++].spec, &eth_spec->type,
+		       sizeof(eth_spec->type));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_ETH_NUM;
+	}
+
+	if (eth_mask) {
+		memcpy(hdr_field[mdx++].mask, eth_mask->dst.addr_bytes,
+		       sizeof(eth_mask->dst.addr_bytes));
+		memcpy(hdr_field[mdx++].mask, eth_mask->src.addr_bytes,
+		       sizeof(eth_mask->src.addr_bytes));
+		memcpy(hdr_field[mdx++].mask, &eth_mask->type,
+		       sizeof(eth_mask->type));
+	}
+	/* Add number of vlan header elements */
+	*field_idx = idx + BNXT_ULP_PROTO_HDR_VLAN_NUM;
+	*vlan_idx = idx;
+
+	/* Update the hdr_bitmap with BNXT_ULP_HDR_BIT_I_ETH */
+	set_flag = ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH);
+	if (set_flag)
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_ETH);
+	else
+		ULP_BITMAP_RESET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_ETH);
+
+	/* Update the hdr_bitmap with BNXT_ULP_HDR_BIT_O_ETH */
+	ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH);
+
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item Vlan Header. */
+int32_t
+ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
+			 struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			 struct ulp_rte_hdr_field *hdr_field,
+			 uint32_t *field_idx __rte_unused,
+			 uint32_t *vlan_idx)
+{
+	const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
+	uint32_t idx = *vlan_idx;
+	uint32_t mdx = *vlan_idx;
+	uint16_t vlan_tag, priority;
+	uint32_t outer_vtag_num = 0, inner_vtag_num = 0;
+	uint8_t *outer_tag_buffer;
+	uint8_t *inner_tag_buffer;
+
+	vlan_spec = item->spec;
+	vlan_mask = item->mask;
+	outer_tag_buffer = hdr_field[BNXT_ULP_HDR_FIELD_O_VTAG_NUM].spec;
+	inner_tag_buffer = hdr_field[BNXT_ULP_HDR_FIELD_I_VTAG_NUM].spec;
+
+	/*
+	 * Copy the rte_flow_item for vlan into hdr_field using Vlan
+	 * header fields
+	 */
+	if (vlan_spec) {
+		vlan_tag = ntohs(vlan_spec->tci);
+		priority = htons(vlan_tag >> 13);
+		vlan_tag &= 0xfff;
+		vlan_tag = htons(vlan_tag);
+
+		hdr_field[idx].size = sizeof(priority);
+		memcpy(hdr_field[idx++].spec, &priority, sizeof(priority));
+		hdr_field[idx].size = sizeof(vlan_tag);
+		memcpy(hdr_field[idx++].spec, &vlan_tag, sizeof(vlan_tag));
+		hdr_field[idx].size = sizeof(vlan_spec->inner_type);
+		memcpy(hdr_field[idx++].spec, &vlan_spec->inner_type,
+		       sizeof(vlan_spec->inner_type));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_S_VLAN_NUM;
+	}
+
+	if (vlan_mask) {
+		vlan_tag = ntohs(vlan_mask->tci);
+		priority = htons(vlan_tag >> 13);
+		vlan_tag &= 0xfff;
+		vlan_tag = htons(vlan_tag);
+
+		memcpy(hdr_field[mdx++].mask, &priority, sizeof(priority));
+		memcpy(hdr_field[mdx++].mask, &vlan_tag, sizeof(vlan_tag));
+		memcpy(hdr_field[mdx++].mask, &vlan_mask->inner_type,
+		       sizeof(vlan_mask->inner_type));
+	}
+	/* Set the vlan index to new incremented value */
+	*vlan_idx = idx;
+
+	/* Get the outer tag and inner tag counts */
+	ulp_util_field_int_read(outer_tag_buffer, &outer_vtag_num);
+	ulp_util_field_int_read(inner_tag_buffer, &inner_vtag_num);
+
+	/* Update the hdr_bitmap of the vlans */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
+	    !ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_OO_VLAN)) {
+		/* Set the outer vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_OO_VLAN);
+		outer_vtag_num++;
+		ulp_util_field_int_write(outer_tag_buffer, outer_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_O_VTAG_NUM].size =
+							sizeof(uint32_t);
+	} else if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OO_VLAN) &&
+		   !ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				     BNXT_ULP_HDR_BIT_OI_VLAN)) {
+		/* Set the outer vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_OI_VLAN);
+		outer_vtag_num++;
+		ulp_util_field_int_write(outer_tag_buffer, outer_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_O_VTAG_NUM].size =
+							    sizeof(uint32_t);
+	} else if (ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_O_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OO_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OI_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_I_ETH) &&
+		   !ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				     BNXT_ULP_HDR_BIT_IO_VLAN)) {
+		/* Set the inner vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_IO_VLAN);
+		inner_vtag_num++;
+		ulp_util_field_int_write(inner_tag_buffer, inner_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_I_VTAG_NUM].size =
+							    sizeof(uint32_t);
+	} else if (ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_O_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OO_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OI_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_I_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_IO_VLAN) &&
+		   !ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				     BNXT_ULP_HDR_BIT_II_VLAN)) {
+		/* Set the inner vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_II_VLAN);
+		inner_vtag_num++;
+		ulp_util_field_int_write(inner_tag_buffer, inner_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_I_VTAG_NUM].size =
+							    sizeof(uint32_t);
+	} else {
+		BNXT_TF_DBG(ERR, "Error Parsing:Vlan hdr found withtout eth\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item IPV4 Header. */
+int32_t
+ulp_rte_ipv4_hdr_handler(const struct rte_flow_item *item,
+			 struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			 struct ulp_rte_hdr_field *hdr_field,
+			 uint32_t *field_idx,
+			 uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	ipv4_spec = item->spec;
+	ipv4_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3)) {
+		BNXT_TF_DBG(ERR, "Parse Error:Third L3 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for ipv4 into hdr_field using ipv4
+	 * header fields
+	 */
+	if (ipv4_spec) {
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.version_ihl);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.version_ihl,
+		       sizeof(ipv4_spec->hdr.version_ihl));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.type_of_service);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.type_of_service,
+		       sizeof(ipv4_spec->hdr.type_of_service));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.total_length);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.total_length,
+		       sizeof(ipv4_spec->hdr.total_length));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.packet_id);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.packet_id,
+		       sizeof(ipv4_spec->hdr.packet_id));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.fragment_offset);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.fragment_offset,
+		       sizeof(ipv4_spec->hdr.fragment_offset));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.time_to_live);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.time_to_live,
+		       sizeof(ipv4_spec->hdr.time_to_live));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.next_proto_id);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.next_proto_id,
+		       sizeof(ipv4_spec->hdr.next_proto_id));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.hdr_checksum);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.hdr_checksum,
+		       sizeof(ipv4_spec->hdr.hdr_checksum));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.src_addr);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.src_addr,
+		       sizeof(ipv4_spec->hdr.src_addr));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.dst_addr);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.dst_addr,
+		       sizeof(ipv4_spec->hdr.dst_addr));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_IPV4_NUM;
+	}
+
+	if (ipv4_mask) {
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.version_ihl,
+		       sizeof(ipv4_mask->hdr.version_ihl));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.type_of_service,
+		       sizeof(ipv4_mask->hdr.type_of_service));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.total_length,
+		       sizeof(ipv4_mask->hdr.total_length));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.packet_id,
+		       sizeof(ipv4_mask->hdr.packet_id));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.fragment_offset,
+		       sizeof(ipv4_mask->hdr.fragment_offset));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.time_to_live,
+		       sizeof(ipv4_mask->hdr.time_to_live));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.next_proto_id,
+		       sizeof(ipv4_mask->hdr.next_proto_id));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.hdr_checksum,
+		       sizeof(ipv4_mask->hdr.hdr_checksum));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.src_addr,
+		       sizeof(ipv4_mask->hdr.src_addr));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.dst_addr,
+		       sizeof(ipv4_mask->hdr.dst_addr));
+	}
+	*field_idx = idx; /* Number of ipv4 header elements */
+
+	/* Set the ipv4 header bitmap and computed l3 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV6)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_IPV4);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV4);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item IPV6 Header */
+int32_t
+ulp_rte_ipv6_hdr_handler(const struct rte_flow_item *item,
+			 struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			 struct ulp_rte_hdr_field *hdr_field,
+			 uint32_t *field_idx,
+			 uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	ipv6_spec = item->spec;
+	ipv6_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3)) {
+		BNXT_TF_DBG(ERR, "Parse Error: 3'rd L3 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for ipv6 into hdr_field using ipv6
+	 * header fields
+	 */
+	if (ipv6_spec) {
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.vtc_flow);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.vtc_flow,
+		       sizeof(ipv6_spec->hdr.vtc_flow));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.payload_len);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.payload_len,
+		       sizeof(ipv6_spec->hdr.payload_len));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.proto);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.proto,
+		       sizeof(ipv6_spec->hdr.proto));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.hop_limits);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.hop_limits,
+		       sizeof(ipv6_spec->hdr.hop_limits));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.src_addr);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.src_addr,
+		       sizeof(ipv6_spec->hdr.src_addr));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.dst_addr);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.dst_addr,
+		       sizeof(ipv6_spec->hdr.dst_addr));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_IPV6_NUM;
+	}
+
+	if (ipv6_mask) {
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.vtc_flow,
+		       sizeof(ipv6_mask->hdr.vtc_flow));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.payload_len,
+		       sizeof(ipv6_mask->hdr.payload_len));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.proto,
+		       sizeof(ipv6_mask->hdr.proto));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.hop_limits,
+		       sizeof(ipv6_mask->hdr.hop_limits));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.src_addr,
+		       sizeof(ipv6_mask->hdr.src_addr));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.dst_addr,
+		       sizeof(ipv6_mask->hdr.dst_addr));
+	}
+	*field_idx = idx; /* add number of ipv6 header elements */
+
+	/* Set the ipv6 header bitmap and computed l3 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV6)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_IPV6);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV6);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item UDP Header. */
+int32_t
+ulp_rte_udp_hdr_handler(const struct rte_flow_item *item,
+			struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			uint32_t *field_idx,
+			uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_udp *udp_spec, *udp_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	udp_spec = item->spec;
+	udp_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4)) {
+		BNXT_TF_DBG(ERR, "Parse Err:Third L4 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for udp into hdr_field using udp
+	 * header fields
+	 */
+	if (udp_spec) {
+		hdr_field[idx].size = sizeof(udp_spec->hdr.src_port);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.src_port,
+		       sizeof(udp_spec->hdr.src_port));
+		hdr_field[idx].size = sizeof(udp_spec->hdr.dst_port);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.dst_port,
+		       sizeof(udp_spec->hdr.dst_port));
+		hdr_field[idx].size = sizeof(udp_spec->hdr.dgram_len);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.dgram_len,
+		       sizeof(udp_spec->hdr.dgram_len));
+		hdr_field[idx].size = sizeof(udp_spec->hdr.dgram_cksum);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.dgram_cksum,
+		       sizeof(udp_spec->hdr.dgram_cksum));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_UDP_NUM;
+	}
+
+	if (udp_mask) {
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.src_port,
+		       sizeof(udp_mask->hdr.src_port));
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.dst_port,
+		       sizeof(udp_mask->hdr.dst_port));
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.dgram_len,
+		       sizeof(udp_mask->hdr.dgram_len));
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.dgram_cksum,
+		       sizeof(udp_mask->hdr.dgram_cksum));
+	}
+	*field_idx = idx; /* Add number of UDP header elements */
+
+	/* Set the udp header bitmap and computed l4 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_UDP) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_TCP)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_UDP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_UDP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item TCP Header. */
+int32_t
+ulp_rte_tcp_hdr_handler(const struct rte_flow_item *item,
+			struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			uint32_t *field_idx,
+			uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	tcp_spec = item->spec;
+	tcp_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4)) {
+		BNXT_TF_DBG(ERR, "Parse Error:Third L4 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for tcp into hdr_field using tcp
+	 * header fields
+	 */
+	if (tcp_spec) {
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.src_port);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.src_port,
+		       sizeof(tcp_spec->hdr.src_port));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.dst_port);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.dst_port,
+		       sizeof(tcp_spec->hdr.dst_port));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.sent_seq);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.sent_seq,
+		       sizeof(tcp_spec->hdr.sent_seq));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.recv_ack);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.recv_ack,
+		       sizeof(tcp_spec->hdr.recv_ack));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.data_off);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.data_off,
+		       sizeof(tcp_spec->hdr.data_off));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.tcp_flags);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.tcp_flags,
+		       sizeof(tcp_spec->hdr.tcp_flags));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.rx_win);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.rx_win,
+		       sizeof(tcp_spec->hdr.rx_win));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.cksum);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.cksum,
+		       sizeof(tcp_spec->hdr.cksum));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.tcp_urp);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.tcp_urp,
+		       sizeof(tcp_spec->hdr.tcp_urp));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_TCP_NUM;
+	}
+
+	if (tcp_mask) {
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.src_port,
+		       sizeof(tcp_mask->hdr.src_port));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.dst_port,
+		       sizeof(tcp_mask->hdr.dst_port));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.sent_seq,
+		       sizeof(tcp_mask->hdr.sent_seq));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.recv_ack,
+		       sizeof(tcp_mask->hdr.recv_ack));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.data_off,
+		       sizeof(tcp_mask->hdr.data_off));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.tcp_flags,
+		       sizeof(tcp_mask->hdr.tcp_flags));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.rx_win,
+		       sizeof(tcp_mask->hdr.rx_win));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.cksum,
+		       sizeof(tcp_mask->hdr.cksum));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.tcp_urp,
+		       sizeof(tcp_mask->hdr.tcp_urp));
+	}
+	*field_idx = idx; /* add number of TCP header elements */
+
+	/* Set the tcp header bitmap and computed l4 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_UDP) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_TCP)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_TCP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_TCP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item Vxlan Header. */
+int32_t
+ulp_rte_vxlan_hdr_handler(const struct rte_flow_item *item,
+			  struct ulp_rte_hdr_bitmap *hdrbitmap,
+			  struct ulp_rte_hdr_field *hdr_field,
+			  uint32_t *field_idx,
+			  uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_vxlan *vxlan_spec, *vxlan_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	vxlan_spec = item->spec;
+	vxlan_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for vxlan into hdr_field using vxlan
+	 * header fields
+	 */
+	if (vxlan_spec) {
+		hdr_field[idx].size = sizeof(vxlan_spec->flags);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->flags,
+		       sizeof(vxlan_spec->flags));
+		hdr_field[idx].size = sizeof(vxlan_spec->rsvd0);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->rsvd0,
+		       sizeof(vxlan_spec->rsvd0));
+		hdr_field[idx].size = sizeof(vxlan_spec->vni);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->vni,
+		       sizeof(vxlan_spec->vni));
+		hdr_field[idx].size = sizeof(vxlan_spec->rsvd1);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->rsvd1,
+		       sizeof(vxlan_spec->rsvd1));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_VXLAN_NUM;
+	}
+
+	if (vxlan_mask) {
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->flags,
+		       sizeof(vxlan_mask->flags));
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->rsvd0,
+		       sizeof(vxlan_mask->rsvd0));
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->vni,
+		       sizeof(vxlan_mask->vni));
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->rsvd1,
+		       sizeof(vxlan_mask->rsvd1));
+	}
+	*field_idx = idx; /* Add number of vxlan header elements */
+
+	/* Update the hdr_bitmap with vxlan */
+	ULP_BITMAP_SET(hdrbitmap->bits, BNXT_ULP_HDR_BIT_T_VXLAN);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item void Header */
+int32_t
+ulp_rte_void_hdr_handler(const struct rte_flow_item *item __rte_unused,
+			 struct ulp_rte_hdr_bitmap *hdr_bit __rte_unused,
+			 struct ulp_rte_hdr_field *hdr_field __rte_unused,
+			 uint32_t *field_idx __rte_unused,
+			 uint32_t *vlan_idx __rte_unused)
+{
+	return BNXT_TF_RC_SUCCESS;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
new file mode 100644
index 0000000..3a7845d
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_RTE_PARSER_H_
+#define _ULP_RTE_PARSER_H_
+
+#include <rte_log.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow items into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
+			      struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			      struct ulp_rte_hdr_field  *hdr_field);
+
+/* Function to handle the parsing of RTE Flow item PF Header. */
+int32_t
+ulp_rte_pf_hdr_handler(const struct rte_flow_item	*item,
+		       struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+		       struct ulp_rte_hdr_field		*hdr_field,
+		       uint32_t				*field_idx,
+		       uint32_t				*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item VF Header. */
+int32_t
+ulp_rte_vf_hdr_handler(const struct rte_flow_item	*item,
+		       struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+		       struct ulp_rte_hdr_field		*hdr_field,
+		       uint32_t				*field_idx,
+		       uint32_t				*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item port id Header. */
+int32_t
+ulp_rte_port_id_hdr_handler(const struct rte_flow_item	*item,
+			    struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			    struct ulp_rte_hdr_field	*hdr_field,
+			    uint32_t			*field_idx,
+			    uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item phy port Header. */
+int32_t
+ulp_rte_phy_port_hdr_handler(const struct rte_flow_item	*item,
+			     struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			     struct ulp_rte_hdr_field	*hdr_field,
+			     uint32_t			*field_idx,
+			     uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item Ethernet Header. */
+int32_t
+ulp_rte_eth_hdr_handler(const struct rte_flow_item	*item,
+			struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			struct ulp_rte_hdr_field	*hdr_field,
+			uint32_t			*field_idx,
+			uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item Vlan Header. */
+int32_t
+ulp_rte_vlan_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item IPV4 Header. */
+int32_t
+ulp_rte_ipv4_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item IPV6 Header. */
+int32_t
+ulp_rte_ipv6_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item UDP Header. */
+int32_t
+ulp_rte_udp_hdr_handler(const struct rte_flow_item	*item,
+			struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			struct ulp_rte_hdr_field	*hdr_field,
+			uint32_t			*field_idx,
+			uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item TCP Header. */
+int32_t
+ulp_rte_tcp_hdr_handler(const struct rte_flow_item	*item,
+			struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			struct ulp_rte_hdr_field	*hdr_field,
+			uint32_t			*field_idx,
+			uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item Vxlan Header. */
+int32_t
+ulp_rte_vxlan_hdr_handler(const struct rte_flow_item	*item,
+			  struct ulp_rte_hdr_bitmap	*hdrbitmap,
+			  struct ulp_rte_hdr_field	*hdr_field,
+			  uint32_t			*field_idx,
+			  uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item void Header. */
+int32_t
+ulp_rte_void_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+#endif /* _ULP_RTE_PARSER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 5a5b1f1..6c214b2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -11,6 +11,7 @@
 #include "ulp_template_db.h"
 #include "ulp_template_field_db.h"
 #include "ulp_template_struct.h"
+#include "ulp_rte_parser.h"
 
 uint32_t ulp_act_prop_map_table[] = {
 	[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ] =
@@ -110,6 +111,201 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 	}
 };
 
+struct bnxt_ulp_rte_hdr_info ulp_hdr_info[] = {
+	[RTE_FLOW_ITEM_TYPE_END] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_END,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_VOID] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_void_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_INVERT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ANY] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PF] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_pf_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_VF] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_vf_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_PHY_PORT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_phy_port_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_PORT_ID] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_port_id_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_RAW] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ETH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_eth_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_VLAN] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_vlan_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV4] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_ipv4_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV6] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_ipv6_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_UDP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_udp_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_TCP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_tcp_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_SCTP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_VXLAN] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_vxlan_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_E_TAG] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_NVGRE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_MPLS] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GRE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_FUZZY] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTPC] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTPU] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ESP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GENEVE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_VXLAN_GPE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV6_EXT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_NS] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_NA] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_SLA_ETH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_TLA_ETH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_MARK] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_META] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GRE_KEY] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTP_PSC] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PPPOES] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PPPOED] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PPPOE_PROTO_ID] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_NSH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_IGMP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_AH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_HIGIG2] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	}
+};
+
 struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
 	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index f4850bf..906b542 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -115,6 +115,13 @@ enum bnxt_ulp_hdr_field {
 	BNXT_ULP_HDR_FIELD_LAST = 4
 };
 
+enum bnxt_ulp_hdr_type {
+	BNXT_ULP_HDR_TYPE_NOT_SUPPORTED = 0,
+	BNXT_ULP_HDR_TYPE_SUPPORTED = 1,
+	BNXT_ULP_HDR_TYPE_END = 2,
+	BNXT_ULP_HDR_TYPE_LAST = 3
+};
+
 enum bnxt_ulp_mask_opc {
 	BNXT_ULP_MASK_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 0e811ec..0699634 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,6 +17,18 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
+/* Number of fields for each protocol */
+#define BNXT_ULP_PROTO_HDR_SVIF_NUM	1
+#define BNXT_ULP_PROTO_HDR_ETH_NUM	3
+#define BNXT_ULP_PROTO_HDR_S_VLAN_NUM	3
+#define BNXT_ULP_PROTO_HDR_VLAN_NUM	6
+#define BNXT_ULP_PROTO_HDR_IPV4_NUM	10
+#define BNXT_ULP_PROTO_HDR_IPV6_NUM	6
+#define BNXT_ULP_PROTO_HDR_UDP_NUM	4
+#define BNXT_ULP_PROTO_HDR_TCP_NUM	9
+#define BNXT_ULP_PROTO_HDR_VXLAN_NUM	4
+#define BNXT_ULP_PROTO_HDR_MAX		128
+
 struct ulp_rte_hdr_bitmap {
 	uint64_t	bits;
 };
@@ -29,6 +41,20 @@ struct ulp_rte_hdr_field {
 	uint32_t	size;
 };
 
+/* Flow Parser Header Information Structure */
+struct bnxt_ulp_rte_hdr_info {
+	enum bnxt_ulp_hdr_type					hdr_type;
+	/* Flow Parser Protocol Header Function Prototype */
+	int (*proto_hdr_func)(const struct rte_flow_item	*item_list,
+			      struct ulp_rte_hdr_bitmap		*hdr_bitmap,
+			      struct ulp_rte_hdr_field		*hdr_field,
+			      uint32_t				*field_idx,
+			      uint32_t				*vlan_idx);
+};
+
+/* Flow parser header information structure array defined in the template source */
+extern struct bnxt_ulp_rte_hdr_info	ulp_hdr_info[];
+
 struct bnxt_ulp_matcher_field_info {
 	enum bnxt_ulp_fmf_mask	mask_opcode;
 	enum bnxt_ulp_fmf_spec	spec_opcode;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 26/34] net/bnxt: add support for rte flow action parsing
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (24 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 25/34] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 27/34] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
                       ` (8 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Registers a callback handler for each supported rte_flow_action
   type
2. Iterates through each rte_flow_action until RTE_FLOW_ACTION_TYPE_END
3. Invokes the action callback handler
4. Each action callback handler populates the respective fields in
   act_details & act_bitmap (see the caller sketch below)
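
For illustration, here is a minimal caller sketch (hypothetical call
site; "actions" stands for the action array handed to rte_flow_create()):

	struct ulp_rte_act_bitmap act_bitmap;
	struct ulp_rte_act_prop act_prop;

	memset(&act_bitmap, 0, sizeof(act_bitmap));
	memset(&act_prop, 0, sizeof(act_prop));

	/* Walks the actions; callbacks fill act_bitmap and act_prop */
	if (bnxt_ulp_rte_parser_act_parse(actions, &act_bitmap,
					  &act_prop) != BNXT_TF_RC_SUCCESS)
		return BNXT_TF_RC_ERROR; /* unsupported action in the list */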

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |   7 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      | 441 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h      |  85 ++++-
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 199 ++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |   7 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  13 +
 6 files changed, 751 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index e4ebfc5..f417579 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -31,6 +31,13 @@ enum bnxt_tf_rc {
 	BNXT_TF_RC_SUCCESS	= 0
 };
 
+/* eth IP type */
+enum bnxt_ulp_eth_ip_type {
+	BNXT_ULP_ETH_IPV4 = 4,
+	BNXT_ULP_ETH_IPV6 = 5,
+	BNXT_ULP_MAX_ETH_IP_TYPE = 0
+};
+
 /* ulp direction Type */
 enum ulp_direction_type {
 	ULP_DIR_INGRESS,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 3ffdcbd..7a31b43 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -30,6 +30,21 @@ static inline void ulp_util_field_int_write(uint8_t *buffer,
 	memcpy(buffer, &temp_val, sizeof(uint32_t));
 }
 
+/* Utility function to skip the void items. */
+static inline int32_t
+ulp_rte_item_skip_void(const struct rte_flow_item **item, uint32_t increment)
+{
+	if (!*item)
+		return 0;
+	if (increment)
+		(*item)++;
+	while ((*item) && (*item)->type == RTE_FLOW_ITEM_TYPE_VOID)
+		(*item)++;
+	if (*item)
+		return 1;
+	return 0;
+}
+
 /*
  * Function to handle the parsing of RTE Flows and placing
  * the RTE flow items into the ulp structures.
@@ -73,6 +88,45 @@ bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
 	return BNXT_TF_RC_SUCCESS;
 }
 
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow actions into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action actions[],
+			      struct ulp_rte_act_bitmap *act_bitmap,
+			      struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action *action_item = actions;
+	struct bnxt_ulp_rte_act_info *hdr_info;
+
+	/* Parse all the actions in the action list */
+	while (action_item && action_item->type != RTE_FLOW_ACTION_TYPE_END) {
+		/* get the action information from the ulp_act_info table */
+		hdr_info = &ulp_act_info[action_item->type];
+		if (hdr_info->act_type ==
+		    BNXT_ULP_ACT_TYPE_NOT_SUPPORTED) {
+			BNXT_TF_DBG(ERR,
+				    "Truflow parser does not support act %u\n",
+				    action_item->type);
+			return BNXT_TF_RC_ERROR;
+		} else if (hdr_info->act_type ==
+		    BNXT_ULP_ACT_TYPE_SUPPORTED) {
+			/* call the registered callback handler */
+			if (hdr_info->proto_act_func) {
+				if (hdr_info->proto_act_func(action_item,
+							     act_bitmap,
+							     act_prop) !=
+				    BNXT_TF_RC_SUCCESS) {
+					return BNXT_TF_RC_ERROR;
+				}
+			}
+		}
+		action_item++;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
 /* Function to handle the parsing of RTE Flow item PF Header. */
 static int32_t
 ulp_rte_parser_svif_set(struct ulp_rte_hdr_bitmap *hdr_bitmap,
@@ -765,3 +819,390 @@ ulp_rte_void_hdr_handler(const struct rte_flow_item *item __rte_unused,
 {
 	return BNXT_TF_RC_SUCCESS;
 }
+
+/* Function to handle the parsing of RTE Flow action void Header. */
+int32_t
+ulp_rte_void_act_handler(const struct rte_flow_action *action_item __rte_unused,
+			 struct ulp_rte_act_bitmap *act __rte_unused,
+			 struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action Mark Header. */
+int32_t
+ulp_rte_mark_act_handler(const struct rte_flow_action *action_item,
+			 struct ulp_rte_act_bitmap *act,
+			 struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_mark *mark;
+	uint32_t mark_id = 0;
+
+	mark = action_item->conf;
+	if (mark) {
+		mark_id = tfp_cpu_to_be_32(mark->id);
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_MARK],
+		       &mark_id, BNXT_ULP_ACT_PROP_SZ_MARK);
+
+		/* Update the action bitmap with mark */
+		ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_MARK);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: Mark arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
+
+/* Function to handle the parsing of RTE Flow action RSS Header. */
+int32_t
+ulp_rte_rss_act_handler(const struct rte_flow_action *action_item,
+			struct ulp_rte_act_bitmap *act,
+			struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	const struct rte_flow_action_rss *rss;
+
+	rss = action_item->conf;
+	if (rss) {
+		/* Update the action bitmap with RSS */
+		ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_RSS);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: RSS arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
+
+/* Function to handle the parsing of RTE Flow action vxlan_encap Header. */
+int32_t
+ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
+				struct ulp_rte_act_bitmap *act,
+				struct ulp_rte_act_prop *ap)
+{
+	const struct rte_flow_action_vxlan_encap *vxlan_encap;
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv6 *ipv6_spec;
+	struct rte_flow_item_vxlan vxlan_spec;
+	uint32_t vlan_num = 0, vlan_size = 0;
+	uint32_t ip_size = 0, ip_type = 0;
+	uint32_t vxlan_size = 0;
+	uint8_t *buff;
+	/* IP header per byte - ver/hlen, TOS, ID, ID, FRAG, FRAG, TTL, PROTO */
+	const uint8_t	def_ipv4_hdr[] = {0x45, 0x00, 0x00, 0x01, 0x00,
+				    0x00, 0x40, 0x11};
+
+	vxlan_encap = action_item->conf;
+	if (!vxlan_encap) {
+		BNXT_TF_DBG(ERR, "Parse Error: Vxlan_encap arg is invalid\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	item = vxlan_encap->definition;
+	if (!item) {
+		BNXT_TF_DBG(ERR, "Parse Error: definition arg is invalid\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (!ulp_rte_item_skip_void(&item, 0))
+		return BNXT_TF_RC_ERROR;
+
+	/* must have ethernet header */
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		BNXT_TF_DBG(ERR, "Parse Error:vxlan encap does not have eth\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	eth_spec = item->spec;
+	buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC];
+	ulp_encap_buffer_copy(buff,
+			      eth_spec->dst.addr_bytes,
+			      BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC);
+
+	/* Go to the next item */
+	if (!ulp_rte_item_skip_void(&item, 1))
+		return BNXT_TF_RC_ERROR;
+
+	/* May have vlan header */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		vlan_num++;
+		buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG];
+		ulp_encap_buffer_copy(buff,
+				      item->spec,
+				      sizeof(struct rte_flow_item_vlan));
+
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	}
+
+	/* may have two vlan headers */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		vlan_num++;
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG +
+		       sizeof(struct rte_flow_item_vlan)],
+		       item->spec,
+		       sizeof(struct rte_flow_item_vlan));
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	}
+	/* Update the vlan count and size if at least one vlan is present */
+	if (vlan_num) {
+		vlan_size = vlan_num * sizeof(struct rte_flow_item_vlan);
+		vlan_num = tfp_cpu_to_be_32(vlan_num);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_NUM],
+		       &vlan_num,
+		       sizeof(uint32_t));
+		vlan_size = tfp_cpu_to_be_32(vlan_size);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_SZ],
+		       &vlan_size,
+		       sizeof(uint32_t));
+	}
+
+	/* L3 must be IPv4 or IPv6 */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+		ipv4_spec = item->spec;
+		ip_size = BNXT_ULP_ENCAP_IPV4_SIZE;
+
+		/* copy the ipv4 details */
+		if (ulp_buffer_is_empty(&ipv4_spec->hdr.version_ihl,
+					BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS)) {
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP];
+			ulp_encap_buffer_copy(buff,
+					      def_ipv4_hdr,
+					      BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS +
+					      BNXT_ULP_ENCAP_IPV4_ID_PROTO);
+		} else {
+			const uint8_t *tmp_buff;
+
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP];
+			ulp_encap_buffer_copy(buff,
+					      &ipv4_spec->hdr.version_ihl,
+					      BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS);
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP +
+			     BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS];
+			tmp_buff = (const uint8_t *)&ipv4_spec->hdr.packet_id;
+			ulp_encap_buffer_copy(buff,
+					      tmp_buff,
+					      BNXT_ULP_ENCAP_IPV4_ID_PROTO);
+		}
+		buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP +
+		    BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS +
+		    BNXT_ULP_ENCAP_IPV4_ID_PROTO];
+		ulp_encap_buffer_copy(buff,
+				      (const uint8_t *)&ipv4_spec->hdr.dst_addr,
+				      BNXT_ULP_ENCAP_IPV4_DEST_IP);
+
+		/* Update the ip size details */
+		ip_size = tfp_cpu_to_be_32(ip_size);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ],
+		       &ip_size, sizeof(uint32_t));
+
+		/* update the ip type */
+		ip_type = rte_cpu_to_be_32(BNXT_ULP_ETH_IPV4);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
+		       &ip_type, sizeof(uint32_t));
+
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	} else if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		ipv6_spec = item->spec;
+		ip_size = BNXT_ULP_ENCAP_IPV6_SIZE;
+
+		/* copy the ipv6 details */
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP],
+		       ipv6_spec, BNXT_ULP_ENCAP_IPV6_SIZE);
+
+		/* Update the ip size details */
+		ip_size = tfp_cpu_to_be_32(ip_size);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ],
+		       &ip_size, sizeof(uint32_t));
+
+		/* update the ip type */
+		ip_type = rte_cpu_to_be_32(BNXT_ULP_ETH_IPV6);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
+		       &ip_type, sizeof(uint32_t));
+
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	} else {
+		BNXT_TF_DBG(ERR, "Parse Error: Vxlan Encap expects L3 hdr\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* L4 is UDP */
+	if (item->type != RTE_FLOW_ITEM_TYPE_UDP) {
+		BNXT_TF_DBG(ERR, "vxlan encap does not have udp\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	/* copy the udp details */
+	ulp_encap_buffer_copy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP],
+			      item->spec, BNXT_ULP_ENCAP_UDP_SIZE);
+
+	if (!ulp_rte_item_skip_void(&item, 1))
+		return BNXT_TF_RC_ERROR;
+
+	/* Finally VXLAN */
+	if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) {
+		BNXT_TF_DBG(ERR, "vxlan encap does not have vni\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	vxlan_size = sizeof(struct rte_flow_item_vxlan);
+	/* copy the vxlan details */
+	memcpy(&vxlan_spec, item->spec, vxlan_size);
+	vxlan_spec.flags = 0x08;
+	ulp_encap_buffer_copy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN],
+			      (const uint8_t *)&vxlan_spec,
+			      vxlan_size);
+	vxlan_size = tfp_cpu_to_be_32(vxlan_size);
+	memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ],
+	       &vxlan_size, sizeof(uint32_t));
+
+	/* update the action bitmap with vxlan encap */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VXLAN_ENCAP);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action vxlan_decap Header. */
+int32_t
+ulp_rte_vxlan_decap_act_handler(const struct rte_flow_action *action_item
+				__rte_unused,
+				struct ulp_rte_act_bitmap *act,
+				struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	/* update the action bitmap with vxlan decap */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VXLAN_DECAP);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action drop Header. */
+int32_t
+ulp_rte_drop_act_handler(const struct rte_flow_action *action_item __rte_unused,
+			 struct ulp_rte_act_bitmap *act,
+			 struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	/* Update the action bitmap with drop */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_DROP);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action count. */
+int32_t
+ulp_rte_count_act_handler(const struct rte_flow_action *action_item,
+			  struct ulp_rte_act_bitmap *act,
+			  struct ulp_rte_act_prop *act_prop __rte_unused)
+
+{
+	const struct rte_flow_action_count *act_count;
+
+	act_count = action_item->conf;
+	if (act_count) {
+		if (act_count->shared) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Error:Shared count not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_COUNT],
+		       &act_count->id,
+		       BNXT_ULP_ACT_PROP_SZ_COUNT);
+	}
+
+	/* Update the action bitmap with count */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_COUNT);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action PF. */
+int32_t
+ulp_rte_pf_act_handler(const struct rte_flow_action *action_item __rte_unused,
+		       struct ulp_rte_act_bitmap *act,
+		       struct ulp_rte_act_prop *act_prop)
+{
+	uint8_t *svif_buf;
+	uint8_t *vnic_buffer;
+	uint32_t svif;
+
+	/* Update the action bitmap with the vnic bit */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VNIC);
+
+	/* copy the PF of the current device into VNIC Property */
+	svif_buf = &act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC];
+	ulp_util_field_int_read(svif_buf, &svif);
+	vnic_buffer = &act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC];
+	ulp_util_field_int_write(vnic_buffer, svif);
+
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action VF. */
+int32_t
+ulp_rte_vf_act_handler(const struct rte_flow_action *action_item,
+		       struct ulp_rte_act_bitmap *act,
+		       struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_vf *vf_action;
+
+	vf_action = action_item->conf;
+	if (vf_action) {
+		if (vf_action->original) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Error:VF Original not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		/* TBD: Update the computed VNIC using VF conversion */
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		       &vf_action->id,
+		       BNXT_ULP_ACT_PROP_SZ_VNIC);
+	}
+
+	/* Update the action bitmap with vnic */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VNIC);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action port_id. */
+int32_t
+ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
+			    struct ulp_rte_act_bitmap *act,
+			    struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_port_id *port_id;
+
+	port_id = act_item->conf;
+	if (port_id) {
+		if (port_id->original) {
+			BNXT_TF_DBG(ERR,
+				    "ParseErr:Portid Original not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		/* TBD: Update the computed VNIC using port conversion */
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		       &port_id->id,
+		       BNXT_ULP_ACT_PROP_SZ_VNIC);
+	}
+
+	/* Update the action bitmap with vnic */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VNIC);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action phy_port. */
+int32_t
+ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
+			     struct ulp_rte_act_bitmap *act,
+			     struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_phy_port *phy_port;
+
+	phy_port = action_item->conf;
+	if (phy_port) {
+		if (phy_port->original) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Err:Port Original not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
+		       &phy_port->index,
+		       BNXT_ULP_ACT_PROP_SZ_VPORT);
+	}
+
+	/* Update the action bitmap with vport */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VPORT);
+	return BNXT_TF_RC_SUCCESS;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index 3a7845d..0ab43d2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -12,6 +12,14 @@
 #include "ulp_template_db.h"
 #include "ulp_template_struct.h"
 
+/* defines to be used in the tunnel header parsing */
+#define BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS	2
+#define BNXT_ULP_ENCAP_IPV4_ID_PROTO		6
+#define BNXT_ULP_ENCAP_IPV4_DEST_IP		4
+#define BNXT_ULP_ENCAP_IPV4_SIZE		12
+#define BNXT_ULP_ENCAP_IPV6_SIZE		8
+#define BNXT_ULP_ENCAP_UDP_SIZE			4
+
 /*
  * Function to handle the parsing of RTE Flows and placing
  * the RTE flow items into the ulp structures.
@@ -21,6 +29,15 @@ bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
 			      struct ulp_rte_hdr_bitmap *hdr_bitmap,
 			      struct ulp_rte_hdr_field  *hdr_field);
 
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow actions into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action	actions[],
+			      struct ulp_rte_act_bitmap		*act_bitmap,
+			      struct ulp_rte_act_prop		*act_prop);
+
 /* Function to handle the parsing of RTE Flow item PF Header. */
 int32_t
 ulp_rte_pf_hdr_handler(const struct rte_flow_item	*item,
@@ -45,7 +62,7 @@ ulp_rte_port_id_hdr_handler(const struct rte_flow_item	*item,
 			    uint32_t			*field_idx,
 			    uint32_t			*vlan_idx);
 
-/* Function to handle the parsing of RTE Flow item port id Header. */
+/* Function to handle the parsing of RTE Flow item port Header. */
 int32_t
 ulp_rte_phy_port_hdr_handler(const struct rte_flow_item	*item,
 			     struct ulp_rte_hdr_bitmap	*hdr_bitmap,
@@ -117,4 +134,70 @@ ulp_rte_void_hdr_handler(const struct rte_flow_item	*item,
 			 uint32_t			*field_idx,
 			 uint32_t			*vlan_idx);
 
+/* Function to handle the parsing of RTE Flow action void Header. */
+int32_t
+ulp_rte_void_act_handler(const struct rte_flow_action	*action_item,
+			 struct ulp_rte_act_bitmap	*act,
+			 struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action RSS Header. */
+int32_t
+ulp_rte_rss_act_handler(const struct rte_flow_action	*action_item,
+			struct ulp_rte_act_bitmap	*act,
+			struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action Mark Header. */
+int32_t
+ulp_rte_mark_act_handler(const struct rte_flow_action	*action_item,
+			 struct ulp_rte_act_bitmap	*act,
+			 struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action vxlan_encap Header. */
+int32_t
+ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action	*action_item,
+				struct ulp_rte_act_bitmap	*act,
+				struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action vxlan_decap Header. */
+int32_t
+ulp_rte_vxlan_decap_act_handler(const struct rte_flow_action	*action_item,
+				struct ulp_rte_act_bitmap	*act,
+				struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action drop Header. */
+int32_t
+ulp_rte_drop_act_handler(const struct rte_flow_action	*action_item,
+			 struct ulp_rte_act_bitmap	*act,
+			 struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action count. */
+int32_t
+ulp_rte_count_act_handler(const struct rte_flow_action	*action_item,
+			  struct ulp_rte_act_bitmap	*act,
+			  struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action PF. */
+int32_t
+ulp_rte_pf_act_handler(const struct rte_flow_action	*action_item,
+		       struct ulp_rte_act_bitmap	*act,
+		       struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action VF. */
+int32_t
+ulp_rte_vf_act_handler(const struct rte_flow_action	*action_item,
+		       struct ulp_rte_act_bitmap	*act,
+		       struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action port_id. */
+int32_t
+ulp_rte_port_id_act_handler(const struct rte_flow_action	*act_item,
+			    struct ulp_rte_act_bitmap		*act,
+			    struct ulp_rte_act_prop		*act_p);
+
+/* Function to handle the parsing of RTE Flow action phy_port. */
+int32_t
+ulp_rte_phy_port_act_handler(const struct rte_flow_action	*action_item,
+			     struct ulp_rte_act_bitmap		*act,
+			     struct ulp_rte_act_prop		*act_prop);
+
 #endif /* _ULP_RTE_PARSER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 6c214b2..411f1e3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -96,6 +96,205 @@ uint32_t ulp_act_prop_map_table[] = {
 		BNXT_ULP_ACT_PROP_SZ_LAST
 };
 
+struct bnxt_ulp_rte_act_info ulp_act_info[] = {
+	[RTE_FLOW_ACTION_TYPE_END] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_END,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_VOID] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_void_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PASSTHRU] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_JUMP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_MARK] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_mark_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_FLAG] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_QUEUE] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DROP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_drop_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_COUNT] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_count_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_RSS] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_rss_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PF] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_pf_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_VF] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_vf_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PHY_PORT] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_phy_port_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PORT_ID] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_port_id_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_METER] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SECURITY] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_MPLS_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_DEC_MPLS_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_NW_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_DEC_NW_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_COPY_TTL_OUT] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_COPY_TTL_IN] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_POP_MPLS] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_vxlan_encap_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_VXLAN_DECAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_vxlan_decap_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_NVGRE_DECAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_RAW_ENCAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_RAW_DECAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV4_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV6_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_TP_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_TP_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_MAC_SWAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DEC_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_MAC_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_MAC_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_INC_TCP_ACK] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	}
+};
+
 struct bnxt_ulp_device_params ulp_device_params[] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
 		.global_fid_enable       = BNXT_ULP_SYM_YES,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 906b542..dfab266 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -74,6 +74,13 @@ enum bnxt_ulp_hdr_bit {
 	BNXT_ULP_HDR_BIT_LAST                = 0x0000000000200000
 };
 
+enum bnxt_ulp_act_type {
+	BNXT_ULP_ACT_TYPE_NOT_SUPPORTED = 0,
+	BNXT_ULP_ACT_TYPE_SUPPORTED = 1,
+	BNXT_ULP_ACT_TYPE_END = 2,
+	BNXT_ULP_ACT_TYPE_LAST = 3
+};
+
 enum bnxt_ulp_byte_order {
 	BNXT_ULP_BYTE_ORDER_BE,
 	BNXT_ULP_BYTE_ORDER_LE,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 0699634..47c0dd8 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -72,6 +72,19 @@ struct ulp_rte_act_prop {
 	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
 };
 
+/* Flow Parser Action Information Structure */
+struct bnxt_ulp_rte_act_info {
+	enum bnxt_ulp_act_type					act_type;
+	/* Flow Parser Protocol Action Function Prototype */
+	int32_t (*proto_act_func)
+		(const struct rte_flow_action			*action_item,
+		struct ulp_rte_act_bitmap			*act_bitmap,
+		struct ulp_rte_act_prop				*act_prop);
+};
+
+/* Flow Parser Action Information Structure Array defined in template source */
+extern struct bnxt_ulp_rte_act_info	ulp_act_info[];
+
 /* Flow Matcher structures */
 struct bnxt_ulp_header_match_info {
 	struct ulp_rte_hdr_bitmap		hdr_bitmap;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 27/34] net/bnxt: add support for rte flow create driver hook
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (25 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 26/34] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 28/34] net/bnxt: add support for rte flow validate " Venkat Duvvuru
                       ` (7 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Validates rte_flow_create arguments
2. Parses rte_flow_item types
3. Parses rte_flow_action types
4. Calls ulp_matcher_pattern_match to see if the flow is supported
5. If there is a match, calls ulp_mapper_flow_create to program
   key & action tables (see the usage sketch below)
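
For reference, a minimal application-side sketch of a flow this path can
program (values and the helper name are illustrative; only the standard
rte_flow API is used):

#include <rte_flow.h>

/* Offload "any UDP-over-IPv4 packet -> mark 0x1234".  This exercises the
 * ETH/IPV4/UDP item parsers and the MARK action handler added by this
 * series; error handling is kept minimal for brevity.
 */
static struct rte_flow *
offload_udp_mark(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_action_mark mark = { .id = 0x1234 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}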

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile               |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 177 ++++++++++++++++++++++++++++++++
 2 files changed, 178 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 5e2d751..5ed33cc 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -65,6 +65,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mapper.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_matcher.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_rte_parser.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp_flow.c
 
 #
 # Export include files
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
new file mode 100644
index 0000000..6402dd3
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -0,0 +1,177 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "bnxt_tf_common.h"
+#include "ulp_rte_parser.h"
+#include "ulp_matcher.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+#include <rte_malloc.h>
+
+static int32_t
+bnxt_ulp_flow_validate_args(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item pattern[],
+			    const struct rte_flow_action actions[],
+			    struct rte_flow_error *error)
+{
+	/* Perform the validation of the arguments for null */
+	if (!error)
+		return BNXT_TF_RC_ERROR;
+
+	if (!pattern) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				   NULL,
+				   "NULL pattern.");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL,
+				   "NULL action.");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL,
+				   "NULL attribute.");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (attr->egress && attr->ingress) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   attr,
+				   "EGRESS AND INGRESS UNSUPPORTED");
+		return BNXT_TF_RC_ERROR;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to create the rte flow. */
+static struct rte_flow *
+bnxt_ulp_flow_create(struct rte_eth_dev			*dev,
+		     const struct rte_flow_attr		*attr,
+		     const struct rte_flow_item		pattern[],
+		     const struct rte_flow_action	actions[],
+		     struct rte_flow_error		*error)
+{
+	struct ulp_rte_hdr_bitmap hdr_bitmap;
+	struct ulp_rte_hdr_field hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	struct ulp_rte_act_bitmap act_bitmap;
+	struct ulp_rte_act_prop act_prop;
+	enum ulp_direction_type dir = ULP_DIR_INGRESS;
+	uint32_t class_id, act_tmpl;
+	uint32_t app_priority;
+	int ret;
+	struct bnxt_ulp_context *ulp_ctx = NULL;
+	uint32_t vnic;
+	uint8_t svif;
+	struct rte_flow *flow_id;
+	uint32_t fid;
+
+	if (bnxt_ulp_flow_validate_args(attr,
+					pattern, actions,
+					error) == BNXT_TF_RC_ERROR) {
+		BNXT_TF_DBG(ERR, "Invalid arguments being passed\n");
+		return NULL;
+	}
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		return NULL;
+	}
+
+	/* clear the header bitmap and field structure */
+	memset(&hdr_bitmap, 0, sizeof(struct ulp_rte_hdr_bitmap));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(&act_bitmap, 0, sizeof(act_bitmap));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	svif = bnxt_get_svif(dev->data->port_id, false);
+	BNXT_TF_DBG(ERR, "SVIF for port[%d][port]=0x%08x\n",
+		    dev->data->port_id, svif);
+
+	hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].size = sizeof(svif);
+	hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].spec[0] = svif;
+	hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].mask[0] = -1;
+	ULP_BITMAP_SET(hdr_bitmap.bits, BNXT_ULP_HDR_BIT_SVIF);
+
+	/*
+	 * VNIC is being pushed as 32bit and the pop will take care of
+	 * proper size
+	 */
+	vnic = (uint32_t)bnxt_get_vnic_id(dev->data->port_id);
+	vnic = htonl(vnic);
+	rte_memcpy(&act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		   &vnic, BNXT_ULP_ACT_PROP_SZ_VNIC);
+
+	/* Parse the rte flow pattern */
+	ret = bnxt_ulp_rte_parser_hdr_parse(pattern,
+					    &hdr_bitmap,
+					    hdr_field);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	/* Parse the rte flow action */
+	ret = bnxt_ulp_rte_parser_act_parse(actions,
+					    &act_bitmap,
+					    &act_prop);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	if (attr->egress)
+		dir = ULP_DIR_EGRESS;
+
+	ret = ulp_matcher_pattern_match(dir, &hdr_bitmap, hdr_field,
+					&act_bitmap, &class_id);
+
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	ret = ulp_matcher_action_match(dir, &act_bitmap, &act_tmpl);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	app_priority = attr->priority;
+	/* call the ulp mapper to create the flow in the hardware */
+	ret = ulp_mapper_flow_create(ulp_ctx,
+				     app_priority,
+				     &hdr_bitmap,
+				     hdr_field,
+				     &act_bitmap,
+				     &act_prop,
+				     class_id,
+				     act_tmpl,
+				     &fid);
+	if (!ret) {
+		flow_id = (struct rte_flow *)((uintptr_t)fid);
+		return flow_id;
+	}
+
+parse_error:
+	rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+			   "Failed to create flow.");
+	return NULL;
+}
+
+const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
+	.validate = NULL,
+	.create = bnxt_ulp_flow_create,
+	.destroy = NULL,
+	.flush = NULL,
+	.query = NULL,
+	.isolate = NULL
+};
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 28/34] net/bnxt: add support for rte flow validate driver hook
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (26 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 27/34] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 29/34] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
                       ` (6 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Validates rte_flow_create arguments
2. Parses rte_flow_item types
3. Parses rte_flow_action types
4. Calls ulp_matcher_pattern_match to see if the flow is supported
5. If there is a match, returns success; otherwise failure (see the
   sketch below)
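
A hedged sketch of using this hook from an application: validate runs the
same parse and matcher steps as create but stops before any table
programming, so it can gate the create call (helper name is illustrative):

#include <rte_flow.h>

/* Validate first; both calls take identical arguments. */
static struct rte_flow *
checked_create(uint16_t port_id, const struct rte_flow_attr *attr,
	       const struct rte_flow_item *pattern,
	       const struct rte_flow_action *actions)
{
	struct rte_flow_error err;

	if (rte_flow_validate(port_id, attr, pattern, actions, &err) != 0)
		return NULL;	/* parser or matcher rejected the flow */
	return rte_flow_create(port_id, attr, pattern, actions, &err);
}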

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 67 ++++++++++++++++++++++++++++++++-
 1 file changed, 66 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 6402dd3..490b2ba 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -167,8 +167,73 @@ bnxt_ulp_flow_create(struct rte_eth_dev			*dev,
 	return NULL;
 }
 
+/* Function to validate the rte flow. */
+static int
+bnxt_ulp_flow_validate(struct rte_eth_dev *dev __rte_unused,
+		       const struct rte_flow_attr *attr,
+		       const struct rte_flow_item pattern[],
+		       const struct rte_flow_action actions[],
+		       struct rte_flow_error *error)
+{
+	struct ulp_rte_hdr_bitmap hdr_bitmap;
+	struct ulp_rte_hdr_field hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	struct ulp_rte_act_bitmap act_bitmap;
+	struct ulp_rte_act_prop act_prop;
+	enum ulp_direction_type dir = ULP_DIR_INGRESS;
+	uint32_t class_id, act_tmpl;
+	int ret;
+
+	if (bnxt_ulp_flow_validate_args(attr,
+					pattern, actions,
+					error) == BNXT_TF_RC_ERROR) {
+		BNXT_TF_DBG(ERR, "Invalid arguments being passed\n");
+		return -EINVAL;
+	}
+
+	/* clear the header bitmap and field structure */
+	memset(&hdr_bitmap, 0, sizeof(struct ulp_rte_hdr_bitmap));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(&act_bitmap, 0, sizeof(act_bitmap));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	/* Parse the rte flow pattern */
+	ret = bnxt_ulp_rte_parser_hdr_parse(pattern,
+					    &hdr_bitmap,
+					    hdr_field);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	/* Parse the rte flow action */
+	ret = bnxt_ulp_rte_parser_act_parse(actions,
+					    &act_bitmap,
+					    &act_prop);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	if (attr->egress)
+		dir = ULP_DIR_EGRESS;
+
+	ret = ulp_matcher_pattern_match(dir, &hdr_bitmap, hdr_field,
+					&act_bitmap, &class_id);
+
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	ret = ulp_matcher_action_match(dir, &act_bitmap, &act_tmpl);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	/* all good, return success */
+	return ret;
+
+parse_error:
+	rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+			   "Failed to validate flow.");
+	return -EINVAL;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
-	.validate = NULL,
+	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = NULL,
 	.flush = NULL,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 29/34] net/bnxt: add support for rte flow destroy driver hook
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (27 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 28/34] net/bnxt: add support for rte flow validate " Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 30/34] net/bnxt: add support for rte flow flush " Venkat Duvvuru
                       ` (5 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Gets the ulp session information from eth_dev
2. Fetches the flow associated with the flow id from the flow table
3. Calls ulp_mapper_resources_free which releases the key & action
   tables associated with that flow (see the handle sketch below)
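
A note on the handle: this driver returns the flow-db fid cast directly
into the rte_flow pointer value (no allocation), and destroy recovers it
with the reverse cast. A minimal sketch of that encoding (helper names
are illustrative):

#include <stdint.h>

struct rte_flow;	/* opaque to the application */

/* fid -> handle, as done at the end of bnxt_ulp_flow_create() */
static inline struct rte_flow *
fid_to_handle(uint32_t fid)
{
	return (struct rte_flow *)(uintptr_t)fid;
}

/* handle -> fid, as done at the top of bnxt_ulp_flow_destroy() */
static inline uint32_t
handle_to_fid(struct rte_flow *flow)
{
	return (uint32_t)(uintptr_t)flow;
}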

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 490b2ba..35099a3 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -232,10 +232,40 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev __rte_unused,
 	return -EINVAL;
 }
 
+/* Function to destroy the rte flow. */
+static int
+bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
+		      struct rte_flow *flow,
+		      struct rte_flow_error *error)
+{
+	int ret = 0;
+	struct bnxt_ulp_context *ulp_ctx;
+	uint32_t fid;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to destroy flow.");
+		return -EINVAL;
+	}
+
+	fid = (uint32_t)(uintptr_t)flow;
+
+	ret = ulp_mapper_flow_destroy(ulp_ctx, fid);
+	if (ret)
+		rte_flow_error_set(error, -ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to destroy flow.");
+
+	return ret;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
-	.destroy = NULL,
+	.destroy = bnxt_ulp_flow_destroy,
 	.flush = NULL,
 	.query = NULL,
 	.isolate = NULL
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 30/34] net/bnxt: add support for rte flow flush driver hook
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (28 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 29/34] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 31/34] net/bnxt: register tf rte flow ops Venkat Duvvuru
                       ` (4 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Gets the ulp session information from eth_dev
2. Fetches the rte_flow table associated with this session
3. Iterates through all the flows in the flow table
4. Calls ulp_mapper_resources_free which releases the key & action
   tables associated with each flow (see the iteration sketch below)
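
The interesting piece here is ulp_flow_db_next_entry_get(), which walks
the active-flow bitmap. Below is a standalone sketch of the same idiom,
assuming the MSB-first bit ordering implied by the use of __builtin_clzl
in the patch (index 0 stored in the most significant bit, so
count-leading-zeros recovers the lowest active index); names are
illustrative, not the driver's:

#include <stdint.h>

#define WORD_BITS 64	/* analogous to ULP_INDEX_BITMAP_SIZE */

/* Return the first index >= start whose bit is set, or -1 if none. */
static int64_t
bitmap_next_set(const uint64_t *words, uint32_t nbits, uint32_t start)
{
	uint32_t idx = start / WORD_BITS;
	uint64_t bs;

	if (start >= nbits)
		return -1;
	/* mask off indices below 'start' in the first word (MSB-first) */
	bs = words[idx] & (~0ULL >> (start % WORD_BITS));
	while (!bs) {
		if (++idx * WORD_BITS >= nbits)
			return -1;
		bs = words[idx];
	}
	return (int64_t)idx * WORD_BITS + __builtin_clzll(bs);
}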

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c      |  3 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 33 +++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c   | 69 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h   | 11 ++++++
 4 files changed, 115 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 3795c6d..56e08f2 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -517,6 +517,9 @@ bnxt_ulp_deinit(struct bnxt *bp)
 	if (!session)
 		return;
 
+	/* clean up regular flows */
+	ulp_flow_db_flush_flows(&bp->ulp_ctx, BNXT_ULP_REGULAR_FLOW_TABLE);
+
 	/* cleanup the eem table scope */
 	ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 35099a3..4958895 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -262,11 +262,42 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Function to destroy the rte flows. */
+static int32_t
+bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
+		    struct rte_flow_error *error)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+	int32_t ret;
+	struct bnxt *bp;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to flush flow.");
+		return -EINVAL;
+	}
+	bp = eth_dev->data->dev_private;
+
+	/* Free the resources for the last device */
+	if (!ulp_ctx_deinit_allowed(bp))
+		return 0;
+
+	ret = ulp_flow_db_flush_flows(ulp_ctx, BNXT_ULP_REGULAR_FLOW_TABLE);
+	if (ret)
+		rte_flow_error_set(error, ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to flush flow.");
+	return ret;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = bnxt_ulp_flow_destroy,
-	.flush = NULL,
+	.flush = bnxt_ulp_flow_flush,
 	.query = NULL,
 	.isolate = NULL
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index ee703a1..aed5078 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -555,3 +555,72 @@ int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
 	/* all good, return success */
 	return 0;
 }
+
+/** Get the flow database entry iteratively
+ *
+ * flow_tbl [in] Ptr to flow table
+ * fid [in/out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+static int32_t
+ulp_flow_db_next_entry_get(struct bnxt_ulp_flow_tbl	*flowtbl,
+			   uint32_t			*fid)
+{
+	uint32_t	lfid = *fid;
+	uint32_t	idx;
+	uint64_t	bs;
+
+	do {
+		lfid++;
+		if (lfid >= flowtbl->num_flows)
+			return -ENOENT;
+		idx = lfid / ULP_INDEX_BITMAP_SIZE;
+		while (!(bs = flowtbl->active_flow_tbl[idx])) {
+			idx++;
+			if ((idx * ULP_INDEX_BITMAP_SIZE) >= flowtbl->num_flows)
+				return -ENOENT;
+		}
+		lfid = (idx * ULP_INDEX_BITMAP_SIZE) + __builtin_clzl(bs);
+		if (*fid >= lfid) {
+			BNXT_TF_DBG(ERR, "Flow Database is corrupt\n");
+			return -ENOENT;
+		}
+	} while (!ulp_flow_db_active_flow_is_set(flowtbl, lfid));
+
+	/* all good, return success */
+	*fid = lfid;
+	return 0;
+}
+
+/*
+ * Flush all flows in the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * tbl_idx [in] The index to table
+ *
+ * returns 0 on success or negative number on failure
+ */
+int32_t	ulp_flow_db_flush_flows(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t		idx)
+{
+	uint32_t			fid = 0;
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid Argument\n");
+		return -EINVAL;
+	}
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctx);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Flow database not found\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[idx];
+	while (!ulp_flow_db_next_entry_get(flow_tbl, &fid))
+		(void)ulp_mapper_resources_free(ulp_ctx, fid, idx);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index eb5effa..5435415 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -142,4 +142,15 @@ int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
 			     enum bnxt_ulp_flow_db_tables	tbl_idx,
 			     uint32_t				fid);
 
+/*
+ * Flush all flows in the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * tbl_idx [in] The index to table
+ *
+ * returns 0 on success or negative number on failure
+ */
+int32_t	ulp_flow_db_flush_flows(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t		idx);
+
 #endif /* _ULP_FLOW_DB_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 31/34] net/bnxt: register tf rte flow ops
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (29 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 30/34] net/bnxt: add support for rte flow flush " Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled Venkat Duvvuru
                       ` (3 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

Register bnxt_ulp_rte_flow_ops when host based TRUFLOW is
enabled.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        | 1 +
 drivers/net/bnxt/bnxt_ethdev.c | 6 +++++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index cd20740..a70cdff 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -731,6 +731,7 @@ extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
 
+extern const struct rte_flow_ops bnxt_ulp_rte_flow_ops;
 int32_t bnxt_ulp_init(struct bnxt *bp);
 void bnxt_ulp_deinit(struct bnxt *bp);
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 2f08921..783e6a4 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3288,6 +3288,7 @@ bnxt_filter_ctrl_op(struct rte_eth_dev *dev,
 		    enum rte_filter_type filter_type,
 		    enum rte_filter_op filter_op, void *arg)
 {
+	struct bnxt *bp = dev->data->dev_private;
 	int ret = 0;
 
 	ret = is_bnxt_in_error(dev->data->dev_private);
@@ -3311,7 +3312,10 @@ bnxt_filter_ctrl_op(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_GENERIC:
 		if (filter_op != RTE_ETH_FILTER_GET)
 			return -EINVAL;
-		*(const void **)arg = &bnxt_flow_ops;
+		if (bp->truflow)
+			*(const void **)arg = &bnxt_ulp_rte_flow_ops;
+		else
+			*(const void **)arg = &bnxt_flow_ops;
 		break;
 	default:
 		PMD_DRV_LOG(ERR,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (30 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 31/34] net/bnxt: register tf rte flow ops Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 33/34] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
                       ` (2 subsequent siblings)
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

If bp->truflow is set, then don't enable vector mode.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 783e6a4..5d5b8e0 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -788,7 +788,8 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 		DEV_RX_OFFLOAD_TCP_CKSUM |
 		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_RSS_HASH |
-		DEV_RX_OFFLOAD_VLAN_FILTER))) {
+		DEV_RX_OFFLOAD_VLAN_FILTER)) &&
+	    !bp->truflow) {
 		PMD_DRV_LOG(INFO, "Using vector mode receive for port %d\n",
 			    eth_dev->data->port_id);
 		bp->flags |= BNXT_FLAG_RX_VECTOR_PKT_MODE;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v3 33/34] net/bnxt: add support for injecting mark into packet’s mbuf
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (31 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 34/34] net/bnxt: enable meson build on truflow code Venkat Duvvuru
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Mike Baucom

When a flow is offloaded with MARK action (RTE_FLOW_ACTION_TYPE_MARK),
each packet of that flow will have metadata set in its completion.
This metadata will be used to fetch an index into a mark table where
the actual MARK for that flow is stored. Fetch the MARK from the mark
table and inject it into packet’s mbuf.
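
On the receive side this surfaces through the standard mbuf fields; a
hedged consumer sketch (function name is illustrative):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* When the PMD resolves a mark for a packet it sets PKT_RX_FDIR_ID in
 * ol_flags and stores the mark in hash.fdir.hi.
 */
static uint16_t
rx_and_read_marks(uint16_t port_id, struct rte_mbuf **pkts, uint16_t n)
{
	uint16_t nb, i;

	nb = rte_eth_rx_burst(port_id, 0, pkts, n);
	for (i = 0; i < nb; i++) {
		if (pkts[i]->ol_flags & PKT_RX_FDIR_ID)
			printf("mark 0x%x\n", pkts[i]->hash.fdir.hi);
	}
	return nb;
}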

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_rxr.c            | 153 ++++++++++++++++++++++++---------
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  55 +++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |  18 ++++
 3 files changed, 183 insertions(+), 43 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index bef9720..40da2f2 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -20,6 +20,9 @@
 #include "bnxt_hwrm.h"
 #endif
 
+#include <bnxt_tf_common.h>
+#include <ulp_mark_mgr.h>
+
 /*
  * RX Ring handling
  */
@@ -399,6 +402,109 @@ bnxt_get_rx_ts_thor(struct bnxt *bp, uint32_t rx_ts_cmpl)
 }
 #endif
 
+static void
+bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
+			  struct rte_mbuf *mbuf)
+{
+	uint32_t cfa_code;
+	uint32_t meta_fmt;
+	uint32_t meta;
+	uint32_t eem = 0;
+	uint32_t mark_id;
+	uint32_t flags2;
+	int rc;
+
+	cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
+	flags2 = rte_le_to_cpu_32(rxcmp1->flags2);
+	meta = rte_le_to_cpu_32(rxcmp1->metadata);
+	if (meta) {
+		meta >>= BNXT_RX_META_CFA_CODE_SHIFT;
+
+		/* The flags field holds extra bits of info from [6:4]
+		 * which indicate if the flow is in TCAM or EM or EEM
+		 */
+		meta_fmt = (flags2 & BNXT_CFA_META_FMT_MASK) >>
+			    BNXT_CFA_META_FMT_SHFT;
+		/* meta_fmt == 4 => 'b100 => 'b10x => EM.
+		 * meta_fmt == 5 => 'b101 => 'b10x => EM + VLAN
+		 * meta_fmt == 6 => 'b110 => 'b11x => EEM
+		 * meta_fmt == 7 => 'b111 => 'b11x => EEM + VLAN.
+		 */
+		meta_fmt >>= BNXT_CFA_META_FMT_EM_EEM_SHFT;
+
+		eem = meta_fmt == BNXT_CFA_META_FMT_EEM;
+
+		/* For EEM flows, The first part of cfa_code is 16 bits.
+		 * The second part is embedded in the
+		 * metadata field from bit 19 onwards. The driver needs to
+		 * ignore the first 19 bits of metadata and use the next 12
+		 * bits as higher 12 bits of cfa_code.
+		 */
+		if (eem)
+			cfa_code |= meta << BNXT_CFA_CODE_META_SHIFT;
+	}
+
+	if (cfa_code) {
+		mbuf->hash.fdir.hi = 0;
+		mbuf->hash.fdir.id = 0;
+		if (eem)
+			rc = ulp_mark_db_mark_get(&bp->ulp_ctx, true,
+						  cfa_code, &mark_id);
+		else
+			rc = ulp_mark_db_mark_get(&bp->ulp_ctx, false,
+						  cfa_code, &mark_id);
+		/* If the above fails, simply return and don't add the mark to
+		 * mbuf
+		 */
+		if (rc)
+			return;
+
+		mbuf->hash.fdir.hi	= mark_id;
+		mbuf->udata64		= (cfa_code & 0xffffffffull) << 32;
+		mbuf->hash.fdir.id	= rxcmp1->cfa_code;
+		mbuf->ol_flags		|= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	}
+}
+
+void bnxt_set_mark_in_mbuf(struct bnxt *bp,
+			   struct rx_pkt_cmpl_hi *rxcmp1,
+			   struct rte_mbuf *mbuf)
+{
+	uint32_t cfa_code = 0;
+	uint8_t meta_fmt = 0;
+	uint16_t flags2 = 0;
+	uint32_t meta =  0;
+
+	cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
+	if (!cfa_code)
+		return;
+
+	if (cfa_code && !bp->mark_table[cfa_code].valid)
+		return;
+
+	flags2 = rte_le_to_cpu_16(rxcmp1->flags2);
+	meta = rte_le_to_cpu_32(rxcmp1->metadata);
+	if (meta) {
+		meta >>= BNXT_RX_META_CFA_CODE_SHIFT;
+
+		/* The flags field holds extra bits of info from [6:4]
+		 * which indicate if the flow is in TCAM or EM or EEM
+		 */
+		meta_fmt = (flags2 & BNXT_CFA_META_FMT_MASK) >>
+			   BNXT_CFA_META_FMT_SHFT;
+
+		/* meta_fmt == 4 => 'b100 => 'b10x => EM.
+		 * meta_fmt == 5 => 'b101 => 'b10x => EM + VLAN
+		 * meta_fmt == 6 => 'b110 => 'b11x => EEM
+		 * meta_fmt == 7 => 'b111 => 'b11x => EEM + VLAN.
+		 */
+		meta_fmt >>= BNXT_CFA_META_FMT_EM_EEM_SHFT;
+	}
+
+	mbuf->hash.fdir.hi = bp->mark_table[cfa_code].mark_id;
+	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+}
+
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 			    struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
 {
@@ -415,6 +521,7 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	uint16_t cmp_type;
 	uint32_t flags2_f = 0;
 	uint16_t flags_type;
+	struct bnxt *bp = rxq->bp;
 
 	rxcmp = (struct rx_pkt_cmpl *)
 	    &cpr->cp_desc_ring[cp_cons];
@@ -490,7 +597,10 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 		mbuf->ol_flags |= PKT_RX_RSS_HASH;
 	}
 
-	bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+	if (bp->truflow)
+		bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+	else
+		bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (unlikely((flags_type & RX_PKT_CMPL_FLAGS_MASK) ==
@@ -896,44 +1006,3 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
 
 	return 0;
 }
-
-void bnxt_set_mark_in_mbuf(struct bnxt *bp,
-			   struct rx_pkt_cmpl_hi *rxcmp1,
-			   struct rte_mbuf *mbuf)
-{
-	uint32_t cfa_code = 0;
-	uint8_t meta_fmt =  0;
-	uint16_t flags2 = 0;
-	uint32_t meta =  0;
-
-	cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
-	if (!cfa_code)
-		return;
-
-	if (cfa_code && !bp->mark_table[cfa_code].valid)
-		return;
-
-	flags2 = rte_le_to_cpu_16(rxcmp1->flags2);
-	meta = rte_le_to_cpu_32(rxcmp1->metadata);
-	if (meta) {
-		meta >>= BNXT_RX_META_CFA_CODE_SHIFT;
-
-		/*
-		 * The flags field holds extra bits of info from [6:4]
-		 * which indicate if the flow is in TCAM or EM or EEM
-		 */
-		meta_fmt = (flags2 & BNXT_CFA_META_FMT_MASK) >>
-			   BNXT_CFA_META_FMT_SHFT;
-
-		/*
-		 * meta_fmt == 4 => 'b100 => 'b10x => EM.
-		 * meta_fmt == 5 => 'b101 => 'b10x => EM + VLAN
-		 * meta_fmt == 6 => 'b110 => 'b11x => EEM
-		 * meta_fmt == 7 => 'b111 => 'b11x => EEM + VLAN.
-		 */
-		meta_fmt >>= BNXT_CFA_META_FMT_EM_EEM_SHFT;
-	}
-
-	mbuf->hash.fdir.hi = bp->mark_table[cfa_code].mark_id;
-	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 566668e..ad83531 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -58,7 +58,7 @@ ulp_mark_db_mark_set(struct bnxt_ulp_context *ctxt,
 	idx = ulp_mark_db_idx_get(is_gfid, fid, mtbl);
 
 	if (is_gfid) {
-		BNXT_TF_DBG(ERR, "Set GFID[0x%0x] = 0x%0x\n", idx, mark);
+		BNXT_TF_DBG(DEBUG, "Set GFID[0x%0x] = 0x%0x\n", idx, mark);
 
 		mtbl->gfid_tbl[idx].mark_id = mark;
 		mtbl->gfid_tbl[idx].valid = true;
@@ -176,6 +176,59 @@ ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
 }
 
 /*
+ * Get a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * fid [in] The flow id that is returned by HW in BD
+ *
+ * mark [out] The mark that is associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_get(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t *mark)
+{
+	struct bnxt_ulp_mark_tbl *mtbl;
+	uint32_t idx = 0;
+
+	if (!ctxt || !mark)
+		return -EINVAL;
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+	if (!mtbl) {
+		BNXT_TF_DBG(ERR, "Unable to get Mark Table\n");
+		return -EINVAL;
+	}
+
+	idx = ulp_mark_db_idx_get(is_gfid, fid, mtbl);
+
+	if (is_gfid) {
+		if (!mtbl->gfid_tbl[idx].valid)
+			return -EINVAL;
+
+		BNXT_TF_DBG(DEBUG, "Get GFID[0x%0x] = 0x%0x\n",
+			    idx, mtbl->gfid_tbl[idx].mark_id);
+
+		*mark = mtbl->gfid_tbl[idx].mark_id;
+	} else {
+		if (!mtbl->lfid_tbl[idx].valid)
+			return -EINVAL;
+
+		BNXT_TF_DBG(DEBUG, "Get LFID[0x%0x] = 0x%0x\n",
+			    idx, mtbl->lfid_tbl[idx].mark_id);
+
+		*mark = mtbl->lfid_tbl[idx].mark_id;
+	}
+
+	return 0;
+}
+
+/*
  * Adds a Mark to the Mark Manager
  *
  * ctxt [in] The ulp context for the mark manager
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index f0d1515..0f8a5e5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -55,6 +55,24 @@ int32_t
 ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
 
 /*
+ * Get a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * fid [in] The flow id that is returned by HW in BD
+ *
+ * mark [out] The mark that is associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_get(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t *mark);
+
+/*
  * Adds a Mark to the Mark Manager
  *
  * ctxt [in] The ulp context for the mark manager
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread
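
A minimal sketch of how the pieces in this patch combine on the receive
path, assuming the host-based TRUFLOW path is active. The helper name
example_rx_mark() and the bp->ulp_ctx field are illustrative assumptions;
the real logic lives in bnxt_ulp_set_mark_in_mbuf() above, which also
decodes flags2/metadata to tell EM/EEM (GFID) flows from TCAM (LFID) ones
before doing the lookup.

	/* Sketch only, not part of the patch: resolve a completion's
	 * cfa_code to a user mark via the mark manager and stamp the mbuf.
	 */
	static void example_rx_mark(struct bnxt *bp,
				    struct rx_pkt_cmpl_hi *rxcmp1,
				    struct rte_mbuf *mbuf)
	{
		uint32_t cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
		uint32_t mark_id = 0;

		/* LFID lookup; ulp_mark_db_mark_get() checks the entry's
		 * valid flag and returns -EINVAL when no mark was programmed.
		 */
		if (ulp_mark_db_mark_get(bp->ulp_ctx, false, cfa_code,
					 &mark_id))
			return;

		mbuf->hash.fdir.hi = mark_id;
		mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
	}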

* [dpdk-dev] [PATCH v3 34/34] net/bnxt: enable meson build on truflow code
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (32 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 33/34] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
@ 2020-04-14  8:13     ` Venkat Duvvuru
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
  34 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-14  8:13 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

Include tf_ulp & tf_core directories and the files inside them.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 0c311d2..d75f887 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -1,7 +1,12 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
+# Copyright(c) 2020 Broadcom
 
 install_headers('rte_pmd_bnxt.h')
+
+includes += include_directories('tf_ulp')
+includes += include_directories('tf_core')
+
 sources = files('bnxt_cpr.c',
 	'bnxt_ethdev.c',
 	'bnxt_filter.c',
@@ -16,6 +21,27 @@ sources = files('bnxt_cpr.c',
 	'bnxt_txr.c',
 	'bnxt_util.c',
 	'bnxt_vnic.c',
+
+	'tf_core/tf_core.c',
+	'tf_core/bitalloc.c',
+	'tf_core/tf_msg.c',
+	'tf_core/rand.c',
+	'tf_core/stack.c',
+	'tf_core/tf_em.c',
+	'tf_core/tf_rm.c',
+	'tf_core/tf_tbl.c',
+	'tf_core/tfp.c',
+
+	'tf_ulp/bnxt_ulp.c',
+	'tf_ulp/ulp_mark_mgr.c',
+	'tf_ulp/ulp_flow_db.c',
+	'tf_ulp/ulp_template_db.c',
+	'tf_ulp/ulp_utils.c',
+	'tf_ulp/ulp_mapper.c',
+	'tf_ulp/ulp_matcher.c',
+	'tf_ulp/ulp_rte_parser.c',
+	'tf_ulp/bnxt_ulp_flow.c',
+
 	'rte_pmd_bnxt.c')
 
 if arch_subdir == 'x86'
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread
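
A usage note on this hunk: unlike the make-based revisions of this series,
which gated the TRUFLOW code behind a config flag, the meson build compiles
the tf_core/tf_ulp sources unconditionally; the feature itself stays
runtime-gated by the devargs described in the cover letter. Assuming a
standard DPDK meson workflow of this era, no extra switches are needed to
pick the new files up:

	meson build
	ninja -C build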

* [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management
  2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
                       ` (33 preceding siblings ...)
  2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 34/34] net/bnxt: enable meson build on truflow code Venkat Duvvuru
@ 2020-04-15  8:18     ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 01/34] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
                         ` (36 more replies)
  34 siblings, 37 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

This patchset introduces a new mechanism to allow host-memory based
flow table management. This should allow higher flow scalability
than what is currently supported. This new approach also defines a
new rte_flow parser, and mapper which currently supports basic packet
classification in receive path. The patchset uses a newly implemented
control-plane firmware interface which optimizes flow insertions and
deletions.

This is a baseline patchset with limited scale. Follow-on patches will
add support for more protocol headers, rte_flow attributes, actions
and such.

This is a tech preview feature, hence disabled by default and can be enabled
using bnxt devargs. For example: "-w 0000:0d:00.0,host-based-truflow=1".

v3==>v4
=======
1. Fixed some more compilation issues reported by CI

Ajit Kumar Khaparde (1):
  net/bnxt: add updated dpdk hsi structure

Farah Smith (2):
  net/bnxt: add tf core identifier support
  net/bnxt: add tf core table scope support

Kishore Padmanabha (8):
  net/bnxt: match rte flow items with flow template patterns
  net/bnxt: match rte flow actions with flow template actions
  net/bnxt: add support for rte flow item parsing
  net/bnxt: add support for rte flow action parsing
  net/bnxt: add support for rte flow create driver hook
  net/bnxt: add support for rte flow validate driver hook
  net/bnxt: add support for rte flow destroy driver hook
  net/bnxt: add support for rte flow flush driver hook

Michael Wildt (4):
  net/bnxt: add initial tf core session open
  net/bnxt: add initial tf core session close support
  net/bnxt: add tf core session sram functions
  net/bnxt: add resource manager functionality

Mike Baucom (5):
  net/bnxt: add helper functions for blob/regfile ops
  net/bnxt: add support to process action tables
  net/bnxt: add support to process key tables
  net/bnxt: add support to free key and action tables
  net/bnxt: add support to alloc and program key and act tbls

Pete Spreadborough (2):
  net/bnxt: add truflow message handlers
  net/bnxt: add EM/EEM functionality

Randy Schacher (1):
  net/bnxt: update hwrm prep to use ptr

Shahaji Bhosle (2):
  net/bnxt: add initial tf core resource mgmt support
  net/bnxt: add tf core TCAM support

Venkat Duvvuru (9):
  net/bnxt: fetch SVIF information from the firmware
  net/bnxt: fetch vnic info from DPDK port
  net/bnxt: add devargs parameter for host memory based TRUFLOW feature
  net/bnxt: add support for ULP session manager init
  net/bnxt: add support for ULP session manager cleanup
  net/bnxt: register tf rte flow ops
  net/bnxt: disable vector mode when host based TRUFLOW is enabled
  net/bnxt: add support for injecting mark into packet’s mbuf
  net/bnxt: enable meson build on truflow code

 drivers/net/bnxt/Makefile                       |   24 +
 drivers/net/bnxt/bnxt.h                         |   21 +-
 drivers/net/bnxt/bnxt_ethdev.c                  |  118 +-
 drivers/net/bnxt/bnxt_hwrm.c                    |  319 +-
 drivers/net/bnxt/bnxt_hwrm.h                    |   19 +
 drivers/net/bnxt/bnxt_rxr.c                     |  153 +-
 drivers/net/bnxt/hsi_struct_def_dpdk.h          | 3786 ++++++++++++++++++++---
 drivers/net/bnxt/meson.build                    |   26 +
 drivers/net/bnxt/tf_core/bitalloc.c             |  364 +++
 drivers/net/bnxt/tf_core/bitalloc.h             |  119 +
 drivers/net/bnxt/tf_core/hwrm_tf.h              |  992 ++++++
 drivers/net/bnxt/tf_core/lookup3.h              |  162 +
 drivers/net/bnxt/tf_core/rand.c                 |   47 +
 drivers/net/bnxt/tf_core/rand.h                 |   36 +
 drivers/net/bnxt/tf_core/stack.c                |  107 +
 drivers/net/bnxt/tf_core/stack.h                |  107 +
 drivers/net/bnxt/tf_core/tf_core.c              |  659 ++++
 drivers/net/bnxt/tf_core/tf_core.h              | 1376 ++++++++
 drivers/net/bnxt/tf_core/tf_em.c                |  515 +++
 drivers/net/bnxt/tf_core/tf_em.h                |  117 +
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h   |  166 +
 drivers/net/bnxt/tf_core/tf_msg.c               | 1248 ++++++++
 drivers/net/bnxt/tf_core/tf_msg.h               |  256 ++
 drivers/net/bnxt/tf_core/tf_msg_common.h        |   47 +
 drivers/net/bnxt/tf_core/tf_project.h           |   24 +
 drivers/net/bnxt/tf_core/tf_resources.h         |  542 ++++
 drivers/net/bnxt/tf_core/tf_rm.c                | 3297 ++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_rm.h                |  321 ++
 drivers/net/bnxt/tf_core/tf_session.h           |  300 ++
 drivers/net/bnxt/tf_core/tf_tbl.c               | 1836 +++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.h               |  126 +
 drivers/net/bnxt/tf_core/tfp.c                  |  163 +
 drivers/net/bnxt/tf_core/tfp.h                  |  188 ++
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h        |   54 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c              |  695 +++++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h              |  110 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c         |  303 ++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c           |  626 ++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h           |  156 +
 drivers/net/bnxt/tf_ulp/ulp_mapper.c            | 1513 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h            |   69 +
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c          |  271 ++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h          |  111 +
 drivers/net/bnxt/tf_ulp/ulp_matcher.c           |  188 ++
 drivers/net/bnxt/tf_ulp/ulp_matcher.h           |   35 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c        | 1208 ++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h        |  203 ++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c       | 1713 ++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h       |  354 +++
 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h |  130 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h   |  266 ++
 drivers/net/bnxt/tf_ulp/ulp_utils.c             |  521 ++++
 drivers/net/bnxt/tf_ulp/ulp_utils.h             |  279 ++
 53 files changed, 25891 insertions(+), 495 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.c
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.h
 create mode 100644 drivers/net/bnxt/tf_core/hwrm_tf.h
 create mode 100644 drivers/net/bnxt/tf_core/lookup3.h
 create mode 100644 drivers/net/bnxt/tf_core/rand.c
 create mode 100644 drivers/net/bnxt/tf_core/rand.h
 create mode 100644 drivers/net/bnxt/tf_core/stack.c
 create mode 100644 drivers/net/bnxt/tf_core/stack.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_project.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_resources.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.h
 create mode 100644 drivers/net/bnxt/tf_core/tfp.c
 create mode 100644 drivers/net/bnxt/tf_core/tfp.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.c
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_struct.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.h

-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread
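
Since the cover letter leans on rte_flow without showing a rule, a minimal
sketch of the kind of classification the new parser and mapper target is
given below: match ingress Ethernet/IPv4 traffic and mark it so the mark
lands in the packet's mbuf. Only the generic rte_flow API is used; whether
a given pattern/action combination is accepted depends on the flow
templates in ulp_template_db.c, and error handling is trimmed for brevity.

	#include <rte_flow.h>

	/* Sketch only: create a MARK rule on an ingress port. */
	static struct rte_flow *
	create_mark_flow(uint16_t port_id, uint32_t mark_id)
	{
		struct rte_flow_attr attr = { .ingress = 1 };
		struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		struct rte_flow_action_mark mark = { .id = mark_id };
		struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		struct rte_flow_error error;

		return rte_flow_create(port_id, &attr, pattern,
				       actions, &error);
	}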

* [dpdk-dev] [PATCH v4 01/34] net/bnxt: add updated dpdk hsi structure
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 02/34] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
                         ` (35 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Ajit Kumar Khaparde, Randy Schacher

From: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>

- Add most recent bnxt dpdk header.
- HWRM version updated to 1.10.1.30

Signed-off-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
---
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 3786 +++++++++++++++++++++++++++++---
 1 file changed, 3436 insertions(+), 350 deletions(-)
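
One practical consumer of this header refresh is the new
CFA_TRUFLOW_SUPPORTED capability bit added below. As a rough sketch (the
helper name is an assumption; the real check sits in the driver's
HWRM_VER_GET handling), a driver would gate TruFlow paths on it like so:

	/* Sketch only: test the TruFlow capability advertised by firmware
	 * in the dev_caps_cfg word of the HWRM_VER_GET response.
	 */
	static bool bnxt_fw_supports_truflow(struct hwrm_ver_get_output *resp)
	{
		uint32_t caps = rte_le_to_cpu_32(resp->dev_caps_cfg);

		return (caps &
			HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_TRUFLOW_SUPPORTED) != 0;
	}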

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index c2bae0f..cde96e7 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2014-2019 Broadcom Inc.
+ * Copyright (c) 2014-2020 Broadcom Inc.
  * All rights reserved.
  *
  * DO NOT MODIFY!!! This file is automatically generated.
@@ -386,6 +386,8 @@ struct cmd_nums {
 	#define HWRM_PORT_PHY_MDIO_READ                   UINT32_C(0xb6)
 	#define HWRM_PORT_PHY_MDIO_BUS_ACQUIRE            UINT32_C(0xb7)
 	#define HWRM_PORT_PHY_MDIO_BUS_RELEASE            UINT32_C(0xb8)
+	#define HWRM_PORT_QSTATS_EXT_PFC_WD               UINT32_C(0xb9)
+	#define HWRM_PORT_ECN_QSTATS                      UINT32_C(0xba)
 	#define HWRM_FW_RESET                             UINT32_C(0xc0)
 	#define HWRM_FW_QSTATUS                           UINT32_C(0xc1)
 	#define HWRM_FW_HEALTH_CHECK                      UINT32_C(0xc2)
@@ -404,6 +406,8 @@ struct cmd_nums {
 	#define HWRM_FW_GET_STRUCTURED_DATA               UINT32_C(0xcb)
 	/* Experimental */
 	#define HWRM_FW_IPC_MAILBOX                       UINT32_C(0xcc)
+	#define HWRM_FW_ECN_CFG                           UINT32_C(0xcd)
+	#define HWRM_FW_ECN_QCFG                          UINT32_C(0xce)
 	#define HWRM_EXEC_FWD_RESP                        UINT32_C(0xd0)
 	#define HWRM_REJECT_FWD_RESP                      UINT32_C(0xd1)
 	#define HWRM_FWD_RESP                             UINT32_C(0xd2)
@@ -419,6 +423,7 @@ struct cmd_nums {
 	#define HWRM_TEMP_MONITOR_QUERY                   UINT32_C(0xe0)
 	#define HWRM_REG_POWER_QUERY                      UINT32_C(0xe1)
 	#define HWRM_CORE_FREQUENCY_QUERY                 UINT32_C(0xe2)
+	#define HWRM_REG_POWER_HISTOGRAM                  UINT32_C(0xe3)
 	#define HWRM_WOL_FILTER_ALLOC                     UINT32_C(0xf0)
 	#define HWRM_WOL_FILTER_FREE                      UINT32_C(0xf1)
 	#define HWRM_WOL_FILTER_QCFG                      UINT32_C(0xf2)
@@ -510,7 +515,7 @@ struct cmd_nums {
 	#define HWRM_CFA_EEM_OP                           UINT32_C(0x123)
 	/* Experimental */
 	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS              UINT32_C(0x124)
-	/* Experimental */
+	/* Experimental - DEPRECATED */
 	#define HWRM_CFA_TFLIB                            UINT32_C(0x125)
 	/* Engine CKV - Get the current allocation status of keys provisioned in the key vault. */
 	#define HWRM_ENGINE_CKV_STATUS                    UINT32_C(0x12e)
@@ -629,6 +634,56 @@ struct cmd_nums {
 	 * to the host test.
 	 */
 	#define HWRM_MFG_HDMA_TEST                        UINT32_C(0x209)
+	/* Tells the fw to program the fru memory */
+	#define HWRM_MFG_FRU_EEPROM_WRITE                 UINT32_C(0x20a)
+	/* Tells the fw to read the fru memory */
+	#define HWRM_MFG_FRU_EEPROM_READ                  UINT32_C(0x20b)
+	/* Experimental */
+	#define HWRM_TF                                   UINT32_C(0x2bc)
+	/* Experimental */
+	#define HWRM_TF_VERSION_GET                       UINT32_C(0x2bd)
+	/* Experimental */
+	#define HWRM_TF_SESSION_OPEN                      UINT32_C(0x2c6)
+	/* Experimental */
+	#define HWRM_TF_SESSION_ATTACH                    UINT32_C(0x2c7)
+	/* Experimental */
+	#define HWRM_TF_SESSION_CLOSE                     UINT32_C(0x2c8)
+	/* Experimental */
+	#define HWRM_TF_SESSION_QCFG                      UINT32_C(0x2c9)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_QCAPS                UINT32_C(0x2ca)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_ALLOC                UINT32_C(0x2cb)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_FREE                 UINT32_C(0x2cc)
+	/* Experimental */
+	#define HWRM_TF_SESSION_RESC_FLUSH                UINT32_C(0x2cd)
+	/* Experimental */
+	#define HWRM_TF_TBL_TYPE_GET                      UINT32_C(0x2d0)
+	/* Experimental */
+	#define HWRM_TF_TBL_TYPE_SET                      UINT32_C(0x2d1)
+	/* Experimental */
+	#define HWRM_TF_CTXT_MEM_RGTR                     UINT32_C(0x2da)
+	/* Experimental */
+	#define HWRM_TF_CTXT_MEM_UNRGTR                   UINT32_C(0x2db)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_QCAPS                      UINT32_C(0x2dc)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_OP                         UINT32_C(0x2dd)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_CFG                        UINT32_C(0x2de)
+	/* Experimental */
+	#define HWRM_TF_EXT_EM_QCFG                       UINT32_C(0x2df)
+	/* Experimental */
+	#define HWRM_TF_TCAM_SET                          UINT32_C(0x2ee)
+	/* Experimental */
+	#define HWRM_TF_TCAM_GET                          UINT32_C(0x2ef)
+	/* Experimental */
+	#define HWRM_TF_TCAM_MOVE                         UINT32_C(0x2f0)
+	/* Experimental */
+	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2f1)
+	/* Experimental */
+	#define HWRM_SV                                   UINT32_C(0x400)
 	/* Experimental */
 	#define HWRM_DBG_READ_DIRECT                      UINT32_C(0xff10)
 	/* Experimental */
@@ -658,6 +713,8 @@ struct cmd_nums {
 	#define HWRM_DBG_CRASHDUMP_HEADER                 UINT32_C(0xff1d)
 	/* Experimental */
 	#define HWRM_DBG_CRASHDUMP_ERASE                  UINT32_C(0xff1e)
+	/* Send driver debug information to firmware */
+	#define HWRM_DBG_DRV_TRACE                        UINT32_C(0xff1f)
 	/* Experimental */
 	#define HWRM_NVM_FACTORY_DEFAULTS                 UINT32_C(0xffee)
 	#define HWRM_NVM_VALIDATE_OPTION                  UINT32_C(0xffef)
@@ -857,8 +914,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 1
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 6
-#define HWRM_VERSION_STR "1.10.1.6"
+#define HWRM_VERSION_RSVD 30
+#define HWRM_VERSION_STR "1.10.1.30"
 
 /****************
  * hwrm_ver_get *
@@ -1143,6 +1200,7 @@ struct hwrm_ver_get_output {
 	#define HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_ADV_FLOW_MGNT_SUPPORTED \
 		UINT32_C(0x1000)
 	/*
+	 * Deprecated and replaced with cfa_truflow_supported.
 	 * If set to 1, the firmware is able to support TFLIB features.
 	 * If set to 0, then the firmware doesn’t support TFLIB features.
 	 * By default, this flag should be 0 for older version of core firmware.
@@ -1150,6 +1208,14 @@ struct hwrm_ver_get_output {
 	#define HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_TFLIB_SUPPORTED \
 		UINT32_C(0x2000)
 	/*
+	 * If set to 1, the firmware is able to support TruFlow features.
+	 * If set to 0, then the firmware doesn’t support TruFlow features.
+	 * By default, this flag should be 0 for older version of
+	 * core firmware.
+	 */
+	#define HWRM_VER_GET_OUTPUT_DEV_CAPS_CFG_CFA_TRUFLOW_SUPPORTED \
+		UINT32_C(0x4000)
+	/*
 	 * This field represents the major version of RoCE firmware.
 	 * A change in major version represents a major release.
 	 */
@@ -4508,10 +4574,16 @@ struct hwrm_async_event_cmpl {
 	 */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_EEM_CFG_CHANGE \
 		UINT32_C(0x3c)
-	/* TFLIB unique default VNIC Configuration Change */
+	/*
+	 * Deprecated.
+	 * TFLIB unique default VNIC Configuration Change
+	 */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_TFLIB_DEFAULT_VNIC_CHANGE \
 		UINT32_C(0x3d)
-	/* TFLIB unique link status changed */
+	/*
+	 * Deprecated.
+	 * TFLIB unique link status changed
+	 */
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_TFLIB_LINK_STATUS_CHANGE \
 		UINT32_C(0x3e)
 	/*
@@ -4521,6 +4593,19 @@ struct hwrm_async_event_cmpl {
 	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_QUIESCE_DONE \
 		UINT32_C(0x3f)
 	/*
+	 * An event signifying a HWRM command is in progress and its
+	 * response will be deferred. This event is used on crypto controllers
+	 * only.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_DEFERRED_RESPONSE \
+		UINT32_C(0x40)
+	/*
+	 * An event signifying that a PFC WatchDog configuration
+	 * has changed on any port / cos.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE \
+		UINT32_C(0x41)
+	/*
 	 * A trace log message. This contains firmware trace logs string
 	 * embedded in the asynchronous message. This is an experimental
 	 * event, not meant for production use at this time.
@@ -6393,6 +6478,36 @@ struct hwrm_async_event_cmpl_quiesce_done {
 		UINT32_C(0x2)
 	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_QUIESCE_STATUS_LAST \
 		HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_QUIESCE_STATUS_ERROR
+	/* opaque is 8 b */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_OPAQUE_MASK \
+		UINT32_C(0xff00)
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_OPAQUE_SFT \
+		8
+	/*
+	 * Additional information about internal hardware state related to
+	 * idle/quiesce state.  QUIESCE may succeed per quiesce_status
+	 * regardless of idle_state_flags.  If QUIESCE fails, the host may
+	 * inspect idle_state_flags to determine whether a retry is warranted.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_MASK \
+		UINT32_C(0xff0000)
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_SFT \
+		16
+	/*
+	 * Failure to quiesce is caused by host not updating the NQ consumer
+	 * index.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_INCOMPLETE_NQ \
+		UINT32_C(0x10000)
+	/* Flag 1 indicating partial non-idle state. */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_IDLE_STATUS_1 \
+		UINT32_C(0x20000)
+	/* Flag 2 indicating partial non-idle state. */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_IDLE_STATUS_2 \
+		UINT32_C(0x40000)
+	/* Flag 3 indicating partial non-idle state. */
+	#define HWRM_ASYNC_EVENT_CMPL_QUIESCE_DONE_EVENT_DATA2_IDLE_STATE_FLAGS_IDLE_STATUS_3 \
+		UINT32_C(0x80000)
 	uint8_t	opaque_v;
 	/*
 	 * This value is written by the NIC such that it will be different
@@ -6414,6 +6529,152 @@ struct hwrm_async_event_cmpl_quiesce_done {
 		UINT32_C(0x1)
 } __attribute__((packed));
 
+/* hwrm_async_event_cmpl_deferred_response (size:128b/16B) */
+struct hwrm_async_event_cmpl_deferred_response {
+	uint16_t	type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_SFT             0
+	/* HWRM Asynchronous Event Information */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_HWRM_ASYNC_EVENT \
+		UINT32_C(0x2e)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_TYPE_HWRM_ASYNC_EVENT
+	/* Identifiers of events. */
+	uint16_t	event_id;
+	/*
+	 * An event signifying a HWRM command is in progress and its
+	 * response will be deferred
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_ID_DEFERRED_RESPONSE \
+		UINT32_C(0x40)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_ID_LAST \
+		HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_ID_DEFERRED_RESPONSE
+	/* Event specific data */
+	uint32_t	event_data2;
+	/*
+	 * The PF's mailbox is clear to issue another command.
+	 * A command with this seq_id is still in progress
+	 * and will return a regular HWRM completion when done.
+	 * 'event_data1' field, if non-zero, contains the estimated
+	 * execution time for the command.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_DATA2_SEQ_ID_MASK \
+		UINT32_C(0xffff)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_EVENT_DATA2_SEQ_ID_SFT \
+		0
+	uint8_t	opaque_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_V \
+		UINT32_C(0x1)
+	/* opaque is 7 b */
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_OPAQUE_MASK \
+		UINT32_C(0xfe)
+	#define HWRM_ASYNC_EVENT_CMPL_DEFERRED_RESPONSE_OPAQUE_SFT 1
+	/* 8-lsb timestamp from POR (100-msec resolution) */
+	uint8_t	timestamp_lo;
+	/* 16-lsb timestamp from POR (100-msec resolution) */
+	uint16_t	timestamp_hi;
+	/* Estimated remaining time of command execution in ms (if not zero) */
+	uint32_t	event_data1;
+} __attribute__((packed));
+
+/* hwrm_async_event_cmpl_pfc_watchdog_cfg_change (size:128b/16B) */
+struct hwrm_async_event_cmpl_pfc_watchdog_cfg_change {
+	uint16_t	type;
+	/*
+	 * This field indicates the exact type of the completion.
+	 * By convention, the LSB identifies the length of the
+	 * record in 16B units. Even values indicate 16B
+	 * records. Odd values indicate 32B
+	 * records.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_MASK \
+		UINT32_C(0x3f)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_SFT \
+		0
+	/* HWRM Asynchronous Event Information */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_HWRM_ASYNC_EVENT \
+		UINT32_C(0x2e)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_LAST \
+		HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_TYPE_HWRM_ASYNC_EVENT
+	/* Identifiers of events. */
+	uint16_t	event_id;
+	/* PFC watchdog configuration change for given port/cos */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE \
+		UINT32_C(0x41)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_ID_LAST \
+		HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE
+	/* Event specific data */
+	uint32_t	event_data2;
+	uint8_t	opaque_v;
+	/*
+	 * This value is written by the NIC such that it will be different
+	 * for each pass through the completion queue. The even passes
+	 * will write 1. The odd passes will write 0.
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_V \
+		UINT32_C(0x1)
+	/* opaque is 7 b */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_OPAQUE_MASK \
+		UINT32_C(0xfe)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_OPAQUE_SFT 1
+	/* 8-lsb timestamp from POR (100-msec resolution) */
+	uint8_t	timestamp_lo;
+	/* 16-lsb timestamp from POR (100-msec resolution) */
+	uint16_t	timestamp_hi;
+	/* Event specific data */
+	uint32_t	event_data1;
+	/*
+	 * 1 in bit position X indicates PFC watchdog should
+	 * be on for COSX
+	 */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_MASK \
+		UINT32_C(0xff)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_SFT \
+		0
+	/* 1 means PFC WD for COS0 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS0 \
+		UINT32_C(0x1)
+	/* 1 means PFC WD for COS1 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS1 \
+		UINT32_C(0x2)
+	/* 1 means PFC WD for COS2 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS2 \
+		UINT32_C(0x4)
+	/* 1 means PFC WD for COS3 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS3 \
+		UINT32_C(0x8)
+	/* 1 means PFC WD for COS4 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS4 \
+		UINT32_C(0x10)
+	/* 1 means PFC WD for COS5 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS5 \
+		UINT32_C(0x20)
+	/* 1 means PFC WD for COS6 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS6 \
+		UINT32_C(0x40)
+	/* 1 means PFC WD for COS7 is on, 0 - off. */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PFC_WD_COS_PFC_WD_COS7 \
+		UINT32_C(0x80)
+	/* PORT ID */
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PORT_ID_MASK \
+		UINT32_C(0xffff00)
+	#define HWRM_ASYNC_EVENT_CMPL_PFC_WATCHDOG_CFG_CHANGE_EVENT_DATA1_PORT_ID_SFT \
+		8
+} __attribute__((packed));
+
 /* hwrm_async_event_cmpl_fw_trace_msg (size:128b/16B) */
 struct hwrm_async_event_cmpl_fw_trace_msg {
 	uint16_t	type;
@@ -7220,7 +7481,7 @@ struct hwrm_func_qcaps_input {
 	uint8_t	unused_0[6];
 } __attribute__((packed));
 
-/* hwrm_func_qcaps_output (size:640b/80B) */
+/* hwrm_func_qcaps_output (size:704b/88B) */
 struct hwrm_func_qcaps_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -7441,6 +7702,33 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_NOTIFY_VF_DEF_VNIC_CHNG_SUPPORTED \
 		UINT32_C(0x4000000)
+	/* If set to 1, then the vlan acceleration for TX is disabled. */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_VLAN_ACCELERATION_TX_DISABLED \
+		UINT32_C(0x8000000)
+	/*
+	 * When this bit is '1', it indicates that core firmware supports
+	 * DBG_COREDUMP_XXX commands.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_COREDUMP_CMD_SUPPORTED \
+		UINT32_C(0x10000000)
+	/*
+	 * When this bit is '1', it indicates that core firmware supports
+	 * DBG_CRASHDUMP_XXX commands.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_CRASHDUMP_CMD_SUPPORTED \
+		UINT32_C(0x20000000)
+	/*
+	 * If the query is for a VF, then this flag should be ignored.
+	 * If the query is for a PF and this flag is set to 1, then
+	 * the PF has the capability to support retrieval of
+	 * rx_port_stats_ext_pfc_wd statistics (supported by the PFC
+	 * WatchDog feature) via the hwrm_port_qstats_ext_pfc_wd command.
+	 * If this flag is set to 1, only that (supported) command should
+	 * be used for retrieval of PFC related statistics (rather than
+	 * hwrm_port_qstats_ext command, which could previously be used).
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_PFC_WD_STATS_SUPPORTED \
+		UINT32_C(0x40000000)
 	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
@@ -7551,7 +7839,22 @@ struct hwrm_func_qcaps_output {
 	 * (max_tx_rings) to the function.
 	 */
 	uint16_t	max_sp_tx_rings;
-	uint8_t	unused_0;
+	uint8_t	unused_0[2];
+	uint32_t	flags_ext;
+	/*
+	 * If 1, the device can be configured to set the ECN bits in the
+	 * IP header of received packets if the receive queue length
+	 * exceeds a given threshold.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_MARK_SUPPORTED \
+		UINT32_C(0x1)
+	/*
+	 * If 1, the device can report the number of received packets
+	 * that it marked as having experienced congestion.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_ECN_STATS_SUPPORTED \
+		UINT32_C(0x2)
+	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -7606,7 +7909,7 @@ struct hwrm_func_qcfg_input {
 	uint8_t	unused_0[6];
 } __attribute__((packed));
 
-/* hwrm_func_qcfg_output (size:704b/88B) */
+/* hwrm_func_qcfg_output (size:768b/96B) */
 struct hwrm_func_qcfg_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -8016,7 +8319,17 @@ struct hwrm_func_qcfg_output {
 	 * this value to find out the doorbell page offset from the BAR.
 	 */
 	uint16_t	legacy_l2_db_size_kb;
-	uint8_t	unused_2[1];
+	uint16_t	svif_info;
+	/*
+	 * This field specifies the source virtual interface of the function being
+	 * queried. Drivers can use this to program svif field in the L2 context
+	 * table
+	 */
+	#define HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK      UINT32_C(0x7fff)
+	#define HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_SFT       0
+	/* This field specifies whether svif is valid or not */
+	#define HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID     UINT32_C(0x8000)
+	uint8_t	unused_2[7];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -9862,8 +10175,12 @@ struct hwrm_func_backing_store_qcaps_output {
 	uint32_t	rsvd;
 	/* Reserved for future. */
 	uint16_t	rsvd1;
-	/* Reserved for future. */
-	uint8_t	rsvd2;
+	/*
+	 * Count of TQM fastpath rings to be used for allocating backing store.
+	 * Backing store configuration must be specified for each TQM ring from
+	 * this count in `backing_store_cfg`.
+	 */
+	uint8_t	tqm_fp_rings_count;
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -12178,116 +12495,163 @@ struct hwrm_error_recovery_qcfg_output {
 	 * this much time after writing reset_reg_val in reset_reg.
 	 */
 	uint8_t	delay_after_reset[16];
-	uint8_t	unused_1[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM.  This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal
-	 * processor, the order of writes has to be such that this field
-	 * is written last.
-	 */
-	uint8_t	valid;
-} __attribute__((packed));
-
-/***********************
- * hwrm_func_vlan_qcfg *
- ***********************/
-
-
-/* hwrm_func_vlan_qcfg_input (size:192b/24B) */
-struct hwrm_func_vlan_qcfg_input {
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/*
-	 * The completion ring to send the completion event on. This should
-	 * be the NQ ID returned from the `nq_alloc` HWRM command.
-	 */
-	uint16_t	cmpl_ring;
-	/*
-	 * The sequence ID is used by the driver for tracking multiple
-	 * commands. This ID is treated as opaque data by the firmware and
-	 * the value is returned in the `hwrm_resp_hdr` upon completion.
-	 */
-	uint16_t	seq_id;
-	/*
-	 * The target ID of the command:
-	 * * 0x0-0xFFF8 - The function ID
-	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
-	 * * 0xFFFD - Reserved for user-space HWRM interface
-	 * * 0xFFFF - HWRM
+	 * Error recovery counter.
+	 * Lower 2 bits indicates address space location and upper 30 bits
+	 * indicates actual address.
+	 * A value of 0xFFFF-FFFF indicates this register does not exist.
 	 */
-	uint16_t	target_id;
+	uint32_t	err_recovery_cnt_reg;
+	/* Lower 2 bits indicates address space location. */
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_MASK \
+		UINT32_C(0x3)
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_SFT \
+		0
 	/*
-	 * A physical address pointer pointing to a host buffer that the
-	 * command's response data will be written. This can be either a host
-	 * physical address (HPA) or a guest physical address (GPA) and must
-	 * point to a physically contiguous block of memory.
+	 * If value is 0, this register is located in PCIe config space.
+	 * Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint64_t	resp_addr;
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_PCIE_CFG \
+		UINT32_C(0x0)
 	/*
-	 * Function ID of the function that is being
-	 * configured.
-	 * If set to 0xFF... (All Fs), then the configuration is
-	 * for the requesting function.
+	 * If value is 1, this register is located in GRC address space.
+	 * Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint16_t	fid;
-	uint8_t	unused_0[6];
-} __attribute__((packed));
-
-/* hwrm_func_vlan_qcfg_output (size:320b/40B) */
-struct hwrm_func_vlan_qcfg_output {
-	/* The specific error status for the command. */
-	uint16_t	error_code;
-	/* The HWRM command request type. */
-	uint16_t	req_type;
-	/* The sequence ID from the original command. */
-	uint16_t	seq_id;
-	/* The length of the response data in number of bytes. */
-	uint16_t	resp_len;
-	uint64_t	unused_0;
-	/* S-TAG VLAN identifier configured for the function. */
-	uint16_t	stag_vid;
-	/* S-TAG PCP value configured for the function. */
-	uint8_t	stag_pcp;
-	uint8_t	unused_1;
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_GRC \
+		UINT32_C(0x1)
 	/*
-	 * S-TAG TPID value configured for the function. This field is specified in
-	 * network byte order.
+	 * If value is 2, this register is located in first BAR address
+	 * space. Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint16_t	stag_tpid;
-	/* C-TAG VLAN identifier configured for the function. */
-	uint16_t	ctag_vid;
-	/* C-TAG PCP value configured for the function. */
-	uint8_t	ctag_pcp;
-	uint8_t	unused_2;
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_BAR0 \
+		UINT32_C(0x2)
 	/*
-	 * C-TAG TPID value configured for the function. This field is specified in
-	 * network byte order.
+	 * If value is 3, this register is located in second BAR address
+	 * space. Drivers have to map appropriate window to access this
+	 * register.
 	 */
-	uint16_t	ctag_tpid;
-	/* Future use. */
-	uint32_t	rsvd2;
-	/* Future use. */
-	uint32_t	rsvd3;
-	uint8_t	unused_3[3];
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_BAR1 \
+		UINT32_C(0x3)
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_LAST \
+		HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SPACE_BAR1
+	/* Upper 30bits of the register address. */
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_MASK \
+		UINT32_C(0xfffffffc)
+	#define HWRM_ERROR_RECOVERY_QCFG_OUTPUT_ERR_RECOVERY_CNT_REG_ADDR_SFT \
+		2
+	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
 	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
 
-/**********************
- * hwrm_func_vlan_cfg *
- **********************/
+/***********************
+ * hwrm_func_vlan_qcfg *
+ ***********************/
 
 
-/* hwrm_func_vlan_cfg_input (size:384b/48B) */
-struct hwrm_func_vlan_cfg_input {
+/* hwrm_func_vlan_qcfg_input (size:192b/24B) */
+struct hwrm_func_vlan_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Function ID of the function that is being
+	 * configured.
+	 * If set to 0xFF... (All Fs), then the configuration is
+	 * for the requesting function.
+	 */
+	uint16_t	fid;
+	uint8_t	unused_0[6];
+} __attribute__((packed));
+
+/* hwrm_func_vlan_qcfg_output (size:320b/40B) */
+struct hwrm_func_vlan_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint64_t	unused_0;
+	/* S-TAG VLAN identifier configured for the function. */
+	uint16_t	stag_vid;
+	/* S-TAG PCP value configured for the function. */
+	uint8_t	stag_pcp;
+	uint8_t	unused_1;
+	/*
+	 * S-TAG TPID value configured for the function. This field is specified in
+	 * network byte order.
+	 */
+	uint16_t	stag_tpid;
+	/* C-TAG VLAN identifier configured for the function. */
+	uint16_t	ctag_vid;
+	/* C-TAG PCP value configured for the function. */
+	uint8_t	ctag_pcp;
+	uint8_t	unused_2;
+	/*
+	 * C-TAG TPID value configured for the function. This field is specified in
+	 * network byte order.
+	 */
+	uint16_t	ctag_tpid;
+	/* Future use. */
+	uint32_t	rsvd2;
+	/* Future use. */
+	uint32_t	rsvd3;
+	uint8_t	unused_3[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/**********************
+ * hwrm_func_vlan_cfg *
+ **********************/
+
+
+/* hwrm_func_vlan_cfg_input (size:384b/48B) */
+struct hwrm_func_vlan_cfg_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -14039,6 +14403,9 @@ struct hwrm_port_phy_qcfg_output {
 	/* Module is not inserted. */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_NOTINSERTED \
 		UINT32_C(0x4)
+	/* Module is powered down because of an over-current fault. */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_CURRENTFAULT \
+		UINT32_C(0x5)
 	/* Module status is not applicable. */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_MODULE_STATUS_NOTAPPLICABLE \
 		UINT32_C(0xff)
@@ -15010,7 +15377,7 @@ struct hwrm_port_mac_qcfg_input {
 	uint8_t	unused_0[6];
 } __attribute__((packed));
 
-/* hwrm_port_mac_qcfg_output (size:192b/24B) */
+/* hwrm_port_mac_qcfg_output (size:256b/32B) */
 struct hwrm_port_mac_qcfg_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -15250,6 +15617,20 @@ struct hwrm_port_mac_qcfg_output {
 		UINT32_C(0xe0)
 	#define HWRM_PORT_MAC_QCFG_OUTPUT_COS_FIELD_CFG_DEFAULT_COS_SFT \
 		5
+	uint8_t	unused_1;
+	uint16_t	port_svif_info;
+	/*
+	 * This field specifies the source virtual interface of the port being
+	 * queried. Drivers can use this to program port svif field in the
+	 * L2 context table
+	 */
+	#define HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_MASK \
+		UINT32_C(0x7fff)
+	#define HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_SFT       0
+	/* This field specifies whether port_svif is valid or not */
+	#define HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_VALID \
+		UINT32_C(0x8000)
+	uint8_t	unused_2[5];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -15322,17 +15703,17 @@ struct hwrm_port_mac_ptp_qcfg_output {
 	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_DIRECT_ACCESS \
 		UINT32_C(0x1)
 	/*
-	 * When this bit is set to '1', the PTP information is accessible
-	 * via HWRM commands.
-	 */
-	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_HWRM_ACCESS \
-		UINT32_C(0x2)
-	/*
 	 * When this bit is set to '1', the device supports one-step
 	 * Tx timestamping.
 	 */
 	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_ONE_STEP_TX_TS \
 		UINT32_C(0x4)
+	/*
+	 * When this bit is set to '1', the PTP information is accessible
+	 * via HWRM commands.
+	 */
+	#define HWRM_PORT_MAC_PTP_QCFG_OUTPUT_FLAGS_HWRM_ACCESS \
+		UINT32_C(0x8)
 	uint8_t	unused_0[3];
 	/* Offset of the PTP register for the lower 32 bits of timestamp for RX. */
 	uint32_t	rx_ts_reg_off_lower;
@@ -15375,7 +15756,7 @@ struct hwrm_port_mac_ptp_qcfg_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
-/* Port Tx Statistics Formats */
+/* Port Tx Statistics Format */
 /* tx_port_stats (size:3264b/408B) */
 struct tx_port_stats {
 	/* Total Number of 64 Bytes frames transmitted */
@@ -15516,7 +15897,7 @@ struct tx_port_stats {
 	uint64_t	tx_stat_error;
 } __attribute__((packed));
 
-/* Port Rx Statistics Formats */
+/* Port Rx Statistics Format */
 /* rx_port_stats (size:4224b/528B) */
 struct rx_port_stats {
 	/* Total Number of 64 Bytes frames received */
@@ -15806,7 +16187,7 @@ struct hwrm_port_qstats_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
-/* Port Tx Statistics extended Formats */
+/* Port Tx Statistics extended Format */
 /* tx_port_stats_ext (size:2048b/256B) */
 struct tx_port_stats_ext {
 	/* Total number of tx bytes count on cos queue 0 */
@@ -15875,7 +16256,7 @@ struct tx_port_stats_ext {
 	uint64_t	pfc_pri7_tx_transitions;
 } __attribute__((packed));
 
-/* Port Rx Statistics extended Formats */
+/* Port Rx Statistics extended Format */
 /* rx_port_stats_ext (size:3648b/456B) */
 struct rx_port_stats_ext {
 	/* Number of times link state changed to down */
@@ -15997,6 +16378,424 @@ struct rx_port_stats_ext {
 	uint64_t	rx_discard_packets_cos7;
 } __attribute__((packed));
 
+/*
+ * Port Rx Statistics extended PFC WatchDog Format.
+ * StormDetect and StormRevert event determination is based
+ * on an integration period and a percentage threshold.
+ * StormDetect event - when percentage of XOFF frames receieved
+ * within an integration period exceeds the configured threshold.
+ * StormRevert event - when percentage of XON frames received
+ * within an integration period exceeds the configured threshold.
+ * Actual number of XOFF/XON frames for the events to be triggered
+ * depends on both configured integration period and sampling rate.
+ * The statistics in this structure represent counts of specified
+ * events from the moment the feature (PFC WatchDog) is enabled via
+ * hwrm_queue_pfc_enable_cfg call.
+ */
+/* rx_port_stats_ext_pfc_wd (size:5120b/640B) */
+struct rx_port_stats_ext_pfc_wd {
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri0;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri1;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri2;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri3;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri4;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri5;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri6;
+	/*
+	 * Total number of PFC WatchDog StormDetect events detected
+	 * for Pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_detected_pri7;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri0;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri1;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri2;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri3;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri4;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri5;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri6;
+	/*
+	 * Total number of PFC WatchDog StormRevert events detected
+	 * for Pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_reverted_pri7;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri0;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri1;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri2;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri3;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri4;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri5;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri6;
+	/*
+	 * Total number of packets received during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_pri7;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri0;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri1;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri2;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri3;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri4;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri5;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri6;
+	/*
+	 * Total number of bytes received during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_pri7;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri0;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri1;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri2;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri3;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri4;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri5;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri6;
+	/*
+	 * Total number of packets dropped on rx during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_packets_dropped_pri7;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri0;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri1;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri2;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri3;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri4;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri5;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri6;
+	/*
+	 * Total number of bytes dropped on rx during PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_storms_rx_bytes_dropped_pri7;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri0;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri1;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri2;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri3;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri4;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri5;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri6;
+	/*
+	 * Number of packets received during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_pri7;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri0;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri1;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri2;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri3;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri4;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri5;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri6;
+	/*
+	 * Number of bytes received during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_pri7;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri0;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri1;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri2;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri3;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri4;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri5;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri6;
+	/*
+	 * Number of packets dropped on rx during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_packets_dropped_pri7;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 * for pri 0
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri0;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 * for pri 1
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri1;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 2
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri2;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 3
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri3;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 4
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri4;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 5
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri5;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 6
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri6;
+	/*
+	 * Number of bytes dropped on rx during last PFC watchdog storm
+	 *  for pri 7
+	 */
+	uint64_t	rx_pfc_watchdog_last_storm_rx_bytes_dropped_pri7;
+} __attribute__((packed));
+
 /************************
  * hwrm_port_qstats_ext *
  ************************/
@@ -16090,6 +16889,83 @@ struct hwrm_port_qstats_ext_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
+/*******************************
+ * hwrm_port_qstats_ext_pfc_wd *
+ *******************************/
+
+
+/* hwrm_port_qstats_ext_pfc_wd_input (size:256b/32B) */
+struct hwrm_port_qstats_ext_pfc_wd_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Port ID of port that is being queried. */
+	uint16_t	port_id;
+	/*
+	 * The size of rx_port_stats_ext_pfc_wd
+	 * block in bytes
+	 */
+	uint16_t	pfc_wd_stat_size;
+	uint8_t	unused_0[4];
+	/*
+	 * This is the host address where
+	 * rx_port_stats_ext_pfc_wd will be stored
+	 */
+	uint64_t	pfc_wd_stat_host_addr;
+} __attribute__((packed));
+
+/* hwrm_port_qstats_ext_pfc_wd_output (size:128b/16B) */
+struct hwrm_port_qstats_ext_pfc_wd_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * The size of rx_port_stats_ext_pfc_wd
+	 * statistics block in bytes.
+	 */
+	uint16_t	pfc_wd_stat_size;
+	uint8_t	flags;
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+	uint8_t	unused_0[4];
+} __attribute__((packed));
+
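A minimal usage sketch of the new query, assuming the watchdog stats block shown above is named rx_port_stats_ext_pfc_wd (as the field comments suggest), with hypothetical bnxt_hwrm_send()/bnxt_dma_alloc() helpers standing in for the driver's real HWRM prep and DMA plumbing:

#include <stdint.h>
#include <string.h>
#include "hsi_struct_def_dpdk.h"

/* Hypothetical stand-ins for the driver's HWRM prep and DMA plumbing. */
extern int bnxt_hwrm_send(void *req, size_t len);
extern uint64_t bnxt_dma_alloc(size_t len);

/* Sketch: query PFC watchdog stats for one port into a host buffer. */
static int query_pfc_wd_stats(uint16_t port_id)
{
	struct hwrm_port_qstats_ext_pfc_wd_input req;

	memset(&req, 0, sizeof(req));
	req.port_id = port_id;
	/* Size and DMA address of the rx_port_stats_ext_pfc_wd block. */
	req.pfc_wd_stat_size = sizeof(struct rx_port_stats_ext_pfc_wd);
	req.pfc_wd_stat_host_addr = bnxt_dma_alloc(req.pfc_wd_stat_size);
	return bnxt_hwrm_send(&req, sizeof(req));
}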
 /*************************
  * hwrm_port_lpbk_qstats *
  *************************/
@@ -16168,6 +17044,91 @@ struct hwrm_port_lpbk_qstats_output {
 	uint8_t	valid;
 } __attribute__((packed));
 
+/************************
+ * hwrm_port_ecn_qstats *
+ ************************/
+
+
+/* hwrm_port_ecn_qstats_input (size:192b/24B) */
+struct hwrm_port_ecn_qstats_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Port ID of port that is being queried. Unused if NIC is in
+	 * multi-host mode.
+	 */
+	uint16_t	port_id;
+	uint8_t	unused_0[6];
+} __attribute__((packed));
+
+/* hwrm_port_ecn_qstats_output (size:384b/48B) */
+struct hwrm_port_ecn_qstats_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Number of packets marked in CoS queue 0. */
+	uint32_t	mark_cnt_cos0;
+	/* Number of packets marked in CoS queue 1. */
+	uint32_t	mark_cnt_cos1;
+	/* Number of packets marked in CoS queue 2. */
+	uint32_t	mark_cnt_cos2;
+	/* Number of packets marked in CoS queue 3. */
+	uint32_t	mark_cnt_cos3;
+	/* Number of packets marked in CoS queue 4. */
+	uint32_t	mark_cnt_cos4;
+	/* Number of packets marked in CoS queue 5. */
+	uint32_t	mark_cnt_cos5;
+	/* Number of packets marked in CoS queue 6. */
+	uint32_t	mark_cnt_cos6;
+	/* Number of packets marked in CoS queue 7. */
+	uint32_t	mark_cnt_cos7;
+	/*
+	 * Bitmask that indicates which CoS queues have ECN marking enabled.
+	 * Bit i corresponds to CoS queue i.
+	 */
+	uint8_t	mark_en;
+	uint8_t	unused_0[6];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
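A sketch of decoding the reply, assuming the caller has already observed valid == 1; it relies on the eight mark_cnt_cos fields being laid out consecutively, which the packed struct above guarantees:

#include <stdio.h>
#include <stdint.h>
#include "hsi_struct_def_dpdk.h"

/* Sketch: report per-CoS ECN mark counts from a completed reply. */
static void dump_ecn_marks(const struct hwrm_port_ecn_qstats_output *resp)
{
	/* The eight counters are declared back to back in the packed
	 * struct, so they can be walked as an array. */
	const uint32_t *marks = &resp->mark_cnt_cos0;
	int i;

	for (i = 0; i < 8; i++) {
		/* Bit i of mark_en gates CoS queue i. */
		if (resp->mark_en & (1U << i))
			printf("CoS %d: %u ECN-marked packets\n",
			       i, marks[i]);
	}
}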
 /***********************
  * hwrm_port_clr_stats *
  ***********************/
@@ -18322,7 +19283,7 @@ struct hwrm_port_phy_mdio_bus_acquire_input {
 	 * Timeout in milli seconds, MDIO BUS will be released automatically
 	 * after this time, if another mdio acquire command is not received
 	 * within the timeout window from the same client.
-	 * A 0xFFFF will hold the bus until this bus is released.
+	 * A 0xFFFF will hold the bus until this bus is released.
 	 */
 	uint16_t	mdio_bus_timeout;
 	uint8_t	unused_0[2];
@@ -19158,6 +20119,30 @@ struct hwrm_queue_pfcenable_qcfg_output {
 	/* If set to 1, then PFC is enabled on PRI 7. */
 	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI7_PFC_ENABLED \
 		UINT32_C(0x80)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI0. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI0_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x100)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI1. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI1_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x200)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI2. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI2_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x400)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI3. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI3_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x800)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI4. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI4_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x1000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI5. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI5_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x2000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI6. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI6_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x4000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI7. */
+	#define HWRM_QUEUE_PFCENABLE_QCFG_OUTPUT_FLAGS_PRI7_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x8000)
 	uint8_t	unused_0[3];
 	/*
 	 * This field is used in Output records to indicate that the output
@@ -19229,6 +20214,30 @@ struct hwrm_queue_pfcenable_cfg_input {
 	/* If set to 1, then PFC is requested to be enabled on PRI 7. */
 	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI7_PFC_ENABLED \
 		UINT32_C(0x80)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI0. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI0_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x100)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI1. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI1_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x200)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI2. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI2_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x400)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI3. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI3_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x800)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI4. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI4_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x1000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI5. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI5_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x2000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI6. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI6_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x4000)
+	/* If set to 1, then PFC WatchDog is requested to be enabled on PRI7. */
+	#define HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI7_PFC_WATCHDOG_ENABLED \
+		UINT32_C(0x8000)
 	/*
 	 * Port ID of port for which the table is being configured.
 	 * The HWRM needs to check whether this function is allowed
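The new watchdog bits sit one nibble above the existing per-priority PFC bits (PRI0 at 0x100 vs 0x1), so either bit for priority n can be derived by shifting the corresponding PRI0 define. A sketch, using the cfg input flags:

#include <stdint.h>
#include "hsi_struct_def_dpdk.h"

/* Sketch: flags requesting PFC plus its watchdog on priority 'pri'
 * (0..7), for the hwrm_queue_pfcenable_cfg_input message. */
static uint32_t pfc_wd_cfg_flags(unsigned int pri)
{
	uint32_t flags = 0;

	flags |= HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI0_PFC_ENABLED << pri;
	flags |= HWRM_QUEUE_PFCENABLE_CFG_INPUT_FLAGS_PRI0_PFC_WATCHDOG_ENABLED << pri;
	return flags;
}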
@@ -31831,15 +32840,2172 @@ struct hwrm_cfa_eem_qcfg_input {
 	 */
 	uint64_t	resp_addr;
 	uint32_t	flags;
-	/* When set to 1, indicates the configuration is the TX flow. */
-	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
-	/* When set to 1, indicates the configuration is the RX flow. */
-	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
-	uint32_t	unused_0;
+	/* When set to 1, indicates the configuration is the TX flow. */
+	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
+	/* When set to 1, indicates the configuration is the RX flow. */
+	#define HWRM_CFA_EEM_QCFG_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
+	uint32_t	unused_0;
+} __attribute__((packed));
+
+/* hwrm_cfa_eem_qcfg_output (size:256b/32B) */
+struct hwrm_cfa_eem_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/* When set to 1, indicates the configuration is the TX flow. */
+	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_TX \
+		UINT32_C(0x1)
+	/* When set to 1, indicates the configuration is the RX flow. */
+	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_RX \
+		UINT32_C(0x2)
+	/* When set to 1, all offloaded flows will be sent to EEM. */
+	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x4)
+	/* The number of entries the FW has configured for EEM. */
+	uint32_t	num_entries;
+	/* Configured EEM with the given context id for KEY0 table. */
+	uint16_t	key0_ctx_id;
+	/* Configured EEM with the given context id for KEY1 table. */
+	uint16_t	key1_ctx_id;
+	/* Configured EEM with the given context id for RECORD table. */
+	uint16_t	record_ctx_id;
+	/* Configured EEM with the given context id for EFC table. */
+	uint16_t	efc_ctx_id;
+	/* Configured EEM with the given context id for FID table. */
+	uint16_t	fid_ctx_id;
+	uint8_t	unused_2[5];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/*******************
+ * hwrm_cfa_eem_op *
+ *******************/
+
+
+/* hwrm_cfa_eem_op_input (size:192b/24B) */
+struct hwrm_cfa_eem_op_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	flags;
+	/*
+	 * When set to 1, indicates the host memory which is passed will be
+	 * used for the TX flow offload function specified in fid.
+	 * Note if this bit is set then the path_rx bit can't be set.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
+	/*
+	 * When set to 1, indicates the host memory which is passed will be
+	 * used for the RX flow offload function specified in fid.
+	 * Note if this bit is set then the path_tx bit can't be set.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
+	uint16_t	unused_0;
+	/* The EEM operation to perform. */
+	uint16_t	op;
+	/* This value is reserved and should not be used. */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_RESERVED    UINT32_C(0x0)
+	/*
+	 * To properly stop EEM and ensure there are no in-flight DMAs,
+	 * the caller must disable EEM for the given PF using this call.
+	 * This will safely disable EEM and ensure that all DMAs to the
+	 * keys/records/efc memory have completed.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_DISABLE UINT32_C(0x1)
+	/*
+	 * Once the EEM host memory and EEM options have been configured,
+	 * the caller should enable EEM for the given PF. Note that once
+	 * this call has been made, the EEM mechanism will be active and
+	 * DMAs will occur as packets are processed.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_ENABLE  UINT32_C(0x2)
+	/*
+	 * Clear EEM settings for the given PF so that the register values
+	 * are reset back to their initial state.
+	 */
+	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP UINT32_C(0x3)
+	#define HWRM_CFA_EEM_OP_INPUT_OP_LAST \
+		HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP
+} __attribute__((packed));
+
+/* hwrm_cfa_eem_op_output (size:128b/16B) */
+struct hwrm_cfa_eem_op_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
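As the op comments describe, EEM must be quiesced per direction before its host memory is torn down. A sketch of the disable call for the RX path, with bnxt_hwrm_send() again a hypothetical send helper:

#include <string.h>
#include "hsi_struct_def_dpdk.h"

extern int bnxt_hwrm_send(void *req, size_t len);	/* hypothetical */

/* Sketch: quiesce EEM on the RX path so host memory can be freed. */
static int eem_disable_rx(void)
{
	struct hwrm_cfa_eem_op_input req;

	memset(&req, 0, sizeof(req));
	req.flags = HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX;
	req.op = HWRM_CFA_EEM_OP_INPUT_OP_EEM_DISABLE;
	return bnxt_hwrm_send(&req, sizeof(req));
}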
+/********************************
+ * hwrm_cfa_adv_flow_mgnt_qcaps *
+ ********************************/
+
+
+/* hwrm_cfa_adv_flow_mgnt_qcaps_input (size:256b/32B) */
+struct hwrm_cfa_adv_flow_mgnt_qcaps_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	unused_0[4];
+} __attribute__((packed));
+
+/* hwrm_cfa_adv_flow_mgnt_qcaps_output (size:128b/16B) */
+struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/*
+	 * Value of 1 indicates that the firmware supports 16-bit flow
+	 * handles. Value of 0 indicates that it does not.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_16BIT_SUPPORTED \
+		UINT32_C(0x1)
+	/*
+	 * Value of 1 indicates that the firmware supports 64-bit flow
+	 * handles. Value of 0 indicates that it does not.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_64BIT_SUPPORTED \
+		UINT32_C(0x2)
+	/*
+	 * Value of 1 indicates that the firmware supports the flow batch
+	 * delete operation through the HWRM_CFA_FLOW_FLUSH command.
+	 * Value of 0 indicates that the firmware does not support the flow
+	 * batch delete operation.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_BATCH_DELETE_SUPPORTED \
+		UINT32_C(0x4)
+	/*
+	 * Value of 1 indicates that the firmware supports the flow reset
+	 * all operation through the HWRM_CFA_FLOW_FLUSH command.
+	 * Value of 0 indicates that the firmware does not support it.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_RESET_ALL_SUPPORTED \
+		UINT32_C(0x8)
+	/*
+	 * Value of 1 to indicate that firmware supports use of FID as dest_id in
+	 * HWRM_CFA_NTUPLE_ALLOC/CFG commands.
+	 * Value of 0 indicates firmware does not support use of FID as dest_id.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_DEST_FUNC_SUPPORTED \
+		UINT32_C(0x10)
+	/*
+	 * Value of 1 to indicate that firmware supports TX EEM flows.
+	 * Value of 0 indicates firmware does not support TX EEM flows.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_TX_EEM_FLOW_SUPPORTED \
+		UINT32_C(0x20)
+	/*
+	 * Value of 1 to indicate that firmware supports RX EEM flows.
+	 * Value of 0 indicates firmware does not support RX EEM flows.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RX_EEM_FLOW_SUPPORTED \
+		UINT32_C(0x40)
+	/*
+	 * Value of 1 to indicate that firmware supports the dynamic allocation of an
+	 * on-chip flow counter which can be used for EEM flows.
+	 * Value of 0 indicates firmware does not support the dynamic allocation of an
+	 * on-chip flow counter.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_COUNTER_ALLOC_SUPPORTED \
+		UINT32_C(0x80)
+	/*
+	 * Value of 1 to indicate that firmware supports setting of
+	 * rfs_ring_tbl_idx in HWRM_CFA_NTUPLE_ALLOC command.
+	 * Value of 0 indicates firmware does not support rfs_ring_tbl_idx.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_SUPPORTED \
+		UINT32_C(0x100)
+	/*
+	 * Value of 1 to indicate that firmware supports untagged matching
+	 * criteria on HWRM_CFA_L2_FILTER_ALLOC command. Value of 0
+	 * indicates firmware does not support untagged matching.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_UNTAGGED_VLAN_SUPPORTED \
+		UINT32_C(0x200)
+	/*
+	 * Value of 1 to indicate that firmware supports XDP filter. Value
+	 * of 0 indicates firmware does not support XDP filter.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_XDP_SUPPORTED \
+		UINT32_C(0x400)
+	/*
+	 * Value of 1 indicates that the firmware supports L2 header source
+	 * fields matching criteria on the HWRM_CFA_L2_FILTER_ALLOC command.
+	 * Value of 0 indicates that the firmware does not support L2 header
+	 * source fields matching.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED \
+		UINT32_C(0x800)
+	/*
+	 * If set to 1, firmware is capable of supporting ARP ethertype as
+	 * matching criteria for HWRM_CFA_NTUPLE_FILTER_ALLOC command on the
+	 * RX direction. By default, this flag should be 0 for older versions
+	 * of firmware.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ARP_SUPPORTED \
+		UINT32_C(0x1000)
+	/*
+	 * Value of 1 to indicate that firmware supports setting of
+	 * rfs_ring_tbl_idx in dst_id field of the HWRM_CFA_NTUPLE_ALLOC
+	 * command. Value of 0 indicates firmware does not support
+	 * rfs_ring_tbl_idx in dst_id field.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_V2_SUPPORTED \
+		UINT32_C(0x2000)
+	/*
+	 * If set to 1, firmware is capable of supporting IPv4/IPv6 as
+	 * ethertype in HWRM_CFA_NTUPLE_FILTER_ALLOC command on the RX
+	 * direction. By default, this flag should be 0 for older versions
+	 * of firmware.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ETHERTYPE_IP_SUPPORTED \
+		UINT32_C(0x4000)
+	uint8_t	unused_0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
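A sketch of how a driver might gate EEM offload on these capability bits once the qcaps reply has completed:

#include <stdint.h>
#include "hsi_struct_def_dpdk.h"

/* Sketch: require both TX and RX EEM flow support before enabling
 * EEM-based offload. */
static int eem_supported(const struct hwrm_cfa_adv_flow_mgnt_qcaps_output *resp)
{
	uint32_t need =
		HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_TX_EEM_FLOW_SUPPORTED |
		HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RX_EEM_FLOW_SUPPORTED;

	return (resp->flags & need) == need;
}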
+/******************
+ * hwrm_cfa_tflib *
+ ******************/
+
+
+/* hwrm_cfa_tflib_input (size:1024b/128B) */
+struct hwrm_cfa_tflib_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* TFLIB message type. */
+	uint16_t	tf_type;
+	/* TFLIB message subtype. */
+	uint16_t	tf_subtype;
+	/* unused. */
+	uint8_t	unused0[4];
+	/* TFLIB request data. */
+	uint32_t	tf_req[26];
+} __attribute__((packed));
+
+/* hwrm_cfa_tflib_output (size:5632b/704B) */
+struct hwrm_cfa_tflib_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* TFLIB message type. */
+	uint16_t	tf_type;
+	/* TFLIB message subtype. */
+	uint16_t	tf_subtype;
+	/* TFLIB response code */
+	uint32_t	tf_resp_code;
+	/* TFLIB response data. */
+	uint32_t	tf_resp[170];
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/***********
+ * hwrm_tf *
+ ***********/
+
+
+/* hwrm_tf_input (size:1024b/128B) */
+struct hwrm_tf_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* TF message type. */
+	uint16_t	type;
+	/* TF message subtype. */
+	uint16_t	subtype;
+	/* unused. */
+	uint8_t	unused0[4];
+	/* TF request data. */
+	uint32_t	req[26];
+} __attribute__((packed));
+
+/* hwrm_tf_output (size:5632b/704B) */
+struct hwrm_tf_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* TF message type. */
+	uint16_t	type;
+	/* TF message subtype. */
+	uint16_t	subtype;
+	/* TF response code */
+	uint32_t	resp_code;
+	/* TF response data. */
+	uint32_t	resp[170];
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
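hwrm_tf is a generic envelope: type and subtype select the TruFlow operation, and the 104-byte req[] array carries the subtype-specific payload. A sketch of filling the envelope, where the payload layout is whatever the chosen subtype defines:

#include <stdint.h>
#include <string.h>
#include "hsi_struct_def_dpdk.h"

/* Sketch: copy a subtype-specific payload into the hwrm_tf envelope. */
static int tf_fill_envelope(struct hwrm_tf_input *msg, uint16_t type,
			    uint16_t subtype, const void *payload, size_t len)
{
	if (len > sizeof(msg->req))	/* req[] holds 26 * 4 = 104 bytes */
		return -1;
	memset(msg, 0, sizeof(*msg));
	msg->type = type;
	msg->subtype = subtype;
	memcpy(msg->req, payload, len);
	return 0;
}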
+/***********************
+ * hwrm_tf_version_get *
+ ***********************/
+
+
+/* hwrm_tf_version_get_input (size:128b/16B) */
+struct hwrm_tf_version_get_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_version_get_output (size:128b/16B) */
+struct hwrm_tf_version_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Version Major number. */
+	uint8_t	major;
+	/* Version Minor number. */
+	uint8_t	minor;
+	/* Version Update number. */
+	uint8_t	update;
+	/* unused. */
+	uint8_t	unused0[4];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_session_open *
+ ************************/
+
+
+/* hwrm_tf_session_open_input (size:640b/80B) */
+struct hwrm_tf_session_open_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Name of the session. */
+	uint8_t	session_name[64];
+} __attribute__((packed));
+
+/* hwrm_tf_session_open_output (size:128b/16B) */
+struct hwrm_tf_session_open_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
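A sketch of the open half of the session lifecycle: the driver supplies a name and keeps the returned fw_session_id, which is then carried in every session-scoped message (close, qcfg, resc_*). bnxt_hwrm_send_recv() is a hypothetical request/response helper:

#include <stdint.h>
#include <string.h>
#include "hsi_struct_def_dpdk.h"

/* Hypothetical request/response helper. */
extern int bnxt_hwrm_send_recv(void *req, size_t req_len,
			       void *resp, size_t resp_len);

/* Sketch: open a TruFlow session and keep its firmware id. */
static int tf_open(const char *name, uint32_t *fw_session_id)
{
	struct hwrm_tf_session_open_input req;
	struct hwrm_tf_session_open_output resp;
	int rc;

	memset(&req, 0, sizeof(req));
	memset(&resp, 0, sizeof(resp));
	strncpy((char *)req.session_name, name, sizeof(req.session_name) - 1);
	rc = bnxt_hwrm_send_recv(&req, sizeof(req), &resp, sizeof(resp));
	if (rc == 0)
		*fw_session_id = resp.fw_session_id;
	return rc;
}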
+/**************************
+ * hwrm_tf_session_attach *
+ **************************/
+
+
+/* hwrm_tf_session_attach_input (size:704b/88B) */
+struct hwrm_tf_session_attach_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Unique session identifier for the session that the attach
+	 * request wants to attach to. This value originates from the
+	 * shared session memory that the attach request opened by
+	 * way of the 'attach name' that was passed in to the core
+	 * attach API.
+	 * The fw_session_id of the attach session includes PCIe bus
+	 * info to distinguish the PF and session info to identify
+	 * the associated TruFlow session.
+	 */
+	uint32_t	attach_fw_session_id;
+	/* unused. */
+	uint32_t	unused0;
+	/* Name of the session itself. */
+	uint8_t	session_name[64];
+} __attribute__((packed));
+
+/* hwrm_tf_session_attach_output (size:128b/16B) */
+struct hwrm_tf_session_attach_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Unique session identifier for the session created by the
+	 * firmware. It includes PCIe bus info to distinguish the PF
+	 * and session info to identify the associated TruFlow
+	 * session. This fw_session_id is unique to the attach
+	 * request.
+	 */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/*************************
+ * hwrm_tf_session_close *
+ *************************/
+
+
+/* hwrm_tf_session_close_input (size:192b/24B) */
+struct hwrm_tf_session_close_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[4];
+} __attribute__((packed));
+
+/* hwrm_tf_session_close_output (size:128b/16B) */
+struct hwrm_tf_session_close_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_session_qcfg *
+ ************************/
+
+
+/* hwrm_tf_session_qcfg_input (size:192b/24B) */
+struct hwrm_tf_session_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* unused. */
+	uint8_t	unused0[4];
+} __attribute__((packed));
+
+/* hwrm_tf_session_qcfg_output (size:128b/16B) */
+struct hwrm_tf_session_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* RX action control settings flags. */
+	uint8_t	rx_act_flags;
+	/*
+	 * A value of 1 in this field indicates that Global Flow ID
+	 * reporting into cfa_code and cfa_metadata is enabled.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_ABCR_GFID_EN \
+		UINT32_C(0x1)
+	/*
+	 * A value of 1 in this field indicates that both the inner and
+	 * outer VLAN tags are stripped and the inner tag is passed.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_ABCR_VTAG_DLT_BOTH \
+		UINT32_C(0x2)
+	/*
+	 * A value of 1 in this field indicates that the re-use of
+	 * existing tunnel L2 header SMAC is enabled for
+	 * Non-tunnel L2, L2-L3 and IP-IP tunnel.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_TECT_SMAC_OVR_RUTNSL2 \
+		UINT32_C(0x4)
+	/* TX Action control settings flags. */
+	uint8_t	tx_act_flags;
+	/* VEB enable flag. */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_ABCR_VEB_EN \
+		UINT32_C(0x1)
+	/*
+	 * When set to 1 any GRE tunnels will include the
+	 * optional Key field.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_GRE_SET_K \
+		UINT32_C(0x2)
+	/*
+	 * When set to 1, for GRE tunnels, the IPV6 Traffic Class (TC)
+	 * field of the outer header is inherited from the inner header
+	 * (if present) or the fixed value as taken from the encap
+	 * record.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_IPV6_TC_IH \
+		UINT32_C(0x4)
+	/*
+	 * When set to 1, for GRE tunnels, the IPV4 Type Of Service (TOS)
+	 * field of the outer header is inherited from the inner header
+	 * (if present) or the fixed value as taken from the encap record.
+	 */
+	#define HWRM_TF_SESSION_QCFG_OUTPUT_TX_ACT_FLAGS_TECT_IPV4_TOS_IH \
+		UINT32_C(0x8)
+	/* unused. */
+	uint8_t	unused0[5];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
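For example, a driver could test the GFID reporting bit from a completed qcfg reply as follows (sketch only):

#include "hsi_struct_def_dpdk.h"

/* Sketch: is Global Flow ID reporting into cfa_code/cfa_metadata
 * enabled on the RX path? */
static int gfid_enabled(const struct hwrm_tf_session_qcfg_output *resp)
{
	return !!(resp->rx_act_flags &
		  HWRM_TF_SESSION_QCFG_OUTPUT_RX_ACT_FLAGS_ABCR_GFID_EN);
}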
+/******************************
+ * hwrm_tf_session_resc_qcaps *
+ ******************************/
+
+
+/* hwrm_tf_session_resc_qcaps_input (size:256b/32B) */
+struct hwrm_tf_session_resc_qcaps_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided qcaps_addr
+	 * buffer. The size should be set to the Resource Manager
+	 * provided max qcaps value that is device specific. This is
+	 * the max size possible.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the qcaps output data
+	 * array. Array is of tf_rm_cap type and is device specific.
+	 */
+	uint64_t	qcaps_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_qcaps_output (size:192b/24B) */
+struct hwrm_tf_session_resc_qcaps_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Session reservation strategy. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_MASK \
+		UINT32_C(0x3)
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_SFT \
+		0
+	/* Static partitioning. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC \
+		UINT32_C(0x0)
+	/* Strategy 1. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1 \
+		UINT32_C(0x1)
+	/* Strategy 2. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2 \
+		UINT32_C(0x2)
+	/* Strategy 3. */
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3 \
+		UINT32_C(0x3)
+	#define HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_LAST \
+		HWRM_TF_SESSION_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3
+	/*
+	 * Size of the returned tf_rm_cap data array. The value
+	 * cannot exceed the size defined by the input msg. The data
+	 * array is returned at the DMA address specified by the
+	 * qcaps_addr field of the input msg.
+	 */
+	uint16_t	size;
+	/* unused. */
+	uint16_t	unused0;
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/******************************
+ * hwrm_tf_session_resc_alloc *
+ ******************************/
+
+
+/* hwrm_tf_session_resc_alloc_input (size:256b/32B) */
+struct hwrm_tf_session_resc_alloc_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_ALLOC_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided num_addr
+	 * buffer.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the num input data array
+	 * buffer. Array is of tf_rm_num type. Size of the buffer is
+	 * provided by the 'size' field in this message.
+	 */
+	uint64_t	num_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_alloc_output (size:128b/16B) */
+struct hwrm_tf_session_resc_alloc_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/*****************************
+ * hwrm_tf_session_resc_free *
+ *****************************/
+
+
+/* hwrm_tf_session_resc_free_input (size:256b/32B) */
+struct hwrm_tf_session_resc_free_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_FREE_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided free_addr
+	 * buffer.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the free input data array
+	 * buffer.  Array of tf_rm_res type. Size of the buffer is
+	 * provided by the 'size' field of this message.
+	 */
+	uint64_t	free_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_free_output (size:128b/16B) */
+struct hwrm_tf_session_resc_free_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/******************************
+ * hwrm_tf_session_resc_flush *
+ ******************************/
+
+
+/* hwrm_tf_session_resc_flush_input (size:256b/32B) */
+struct hwrm_tf_session_resc_flush_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_SESSION_RESC_FLUSH_INPUT_FLAGS_DIR_TX
+	/*
+	 * Defines the size, in bytes, of the provided flush_addr
+	 * buffer.
+	 */
+	uint16_t	size;
+	/*
+	 * This is the DMA address for the flush input data array
+	 * buffer.  Array of tf_rm_res type. Size of the buffer is
+	 * provided by the 'size' field in this message.
+	 */
+	uint64_t	flush_addr;
+} __attribute__((packed));
+
+/* hwrm_tf_session_resc_flush_output (size:128b/16B) */
+struct hwrm_tf_session_resc_flush_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/* TruFlow RM capability of a resource. */
+/* tf_rm_cap (size:64b/8B) */
+struct tf_rm_cap {
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Minimum value. */
+	uint16_t	min;
+	/* Maximum value. */
+	uint16_t	max;
+} __attribute__((packed));
+
+/* TruFlow RM number of a resource. */
+/* tf_rm_num (size:64b/8B) */
+struct tf_rm_num {
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Number of resources. */
+	uint32_t	num;
+} __attribute__((packed));
+
+/* TruFlow RM reservation information. */
+/* tf_rm_res (size:64b/8B) */
+struct tf_rm_res {
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Start offset. */
+	uint16_t	start;
+	/* Number of resources. */
+	uint16_t	stride;
+} __attribute__((packed));
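+/*
+ * Editor's sketch (illustrative, not part of the generated interface):
+ * the resc_flush command above consumes an array of tf_rm_res entries
+ * placed in a DMA-able buffer. Assuming a driver-allocated memzone
+ * `mz` and placeholder values `resc_type` and `num_resources`
+ * (all hypothetical names), a caller might fill one entry as:
+ *
+ *   struct tf_rm_res *res = (struct tf_rm_res *)mz->addr;
+ *   res[0].type    = rte_cpu_to_le_32(resc_type);
+ *   res[0].start   = rte_cpu_to_le_16(0);
+ *   res[0].stride  = rte_cpu_to_le_16(num_resources);
+ *   req.size       = rte_cpu_to_le_16(sizeof(struct tf_rm_res));
+ *   req.flush_addr = rte_cpu_to_le_64(mz->iova);
+ */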
+
+/************************
+ * hwrm_tf_tbl_type_get *
+ ************************/
+
+
+/* hwrm_tf_tbl_type_get_input (size:256b/32B) */
+struct hwrm_tf_tbl_type_get_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint8_t	unused0[2];
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Index of the type to retrieve. */
+	uint32_t	index;
+} __attribute__((packed));
+
+/* hwrm_tf_tbl_type_get_output (size:1216b/152B) */
+struct hwrm_tf_tbl_type_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Response code. */
+	uint32_t	resp_code;
+	/* Response size. */
+	uint16_t	size;
+	/* unused */
+	uint16_t	unused0;
+	/* Response data. */
+	uint8_t	data[128];
+	/* unused */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_tbl_type_set *
+ ************************/
+
+
+/* hwrm_tf_tbl_type_set_input (size:1024b/128B) */
+struct hwrm_tf_tbl_type_set_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint8_t	unused0[2];
+	/*
+	 * Type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Index of the type to set. */
+	uint32_t	index;
+	/* Size of the data to set. */
+	uint16_t	size;
+	/* unused */
+	uint8_t	unused1[6];
+	/* Data to be set. */
+	uint8_t	data[88];
+} __attribute__((packed));
+
+/* hwrm_tf_tbl_type_set_output (size:128b/16B) */
+struct hwrm_tf_tbl_type_set_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM. This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field
+	 * is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/*************************
+ * hwrm_tf_ctxt_mem_rgtr *
+ *************************/
+
+
+/* hwrm_tf_ctxt_mem_rgtr_input (size:256b/32B) */
+struct hwrm_tf_ctxt_mem_rgtr_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Counter PBL indirect levels. */
+	uint8_t	page_level;
+	/* PBL pointer is physical start address. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_0 UINT32_C(0x0)
+	/* PBL pointer points to PTE table. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_1 UINT32_C(0x1)
+	/*
+	 * PBL pointer points to PDE table with each entry pointing
+	 * to PTE tables.
+	 */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_2 UINT32_C(0x2)
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LAST \
+		HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_LEVEL_LVL_2
+	/* Page size. */
+	uint8_t	page_size;
+	/* 4KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K   UINT32_C(0x0)
+	/* 8KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K   UINT32_C(0x1)
+	/* 64KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K  UINT32_C(0x4)
+	/* 256KB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K UINT32_C(0x6)
+	/* 1MB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M   UINT32_C(0x8)
+	/* 2MB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M   UINT32_C(0x9)
+	/* 4MB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M   UINT32_C(0xa)
+	/* 1GB page size. */
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G   UINT32_C(0x12)
+	#define HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_LAST \
+		HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+	/* unused. */
+	uint32_t	unused0;
+	/* Pointer to the PBL, or PDL, depending on the number of levels. */
+	uint64_t	page_dir;
+} __attribute__((packed));
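+/*
+ * Editor's note (illustrative, not part of the generated interface):
+ * page_dir is interpreted according to page_level. With LVL_0 it is
+ * the physical start address of one contiguous block; with LVL_1 it
+ * points to a page of PTEs, each holding the address of one data
+ * page; with LVL_2 it points to a page of PDEs, each of which points
+ * to a page of PTEs. page_size describes the data pages being
+ * registered.
+ */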
+
+/* hwrm_tf_ctxt_mem_rgtr_output (size:128b/16B) */
+struct hwrm_tf_ctxt_mem_rgtr_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * Id/Handle to the recently registered context memory. This
+	 * handle is passed to the TF session.
+	 */
+	uint16_t	ctx_id;
+	/* unused. */
+	uint8_t	unused0[5];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/***************************
+ * hwrm_tf_ctxt_mem_unrgtr *
+ ***************************/
+
+
+/* hwrm_tf_ctxt_mem_unrgtr_input (size:192b/24B) */
+struct hwrm_tf_ctxt_mem_unrgtr_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Id/Handle to the recently registered context memory. This
+	 * handle is passed to the TF session.
+	 */
+	uint16_t	ctx_id;
+	/* unused. */
+	uint8_t	unused0[6];
+} __attribute__((packed));
+
+/* hwrm_tf_ctxt_mem_unrgtr_output (size:128b/16B) */
+struct hwrm_tf_ctxt_mem_unrgtr_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/************************
+ * hwrm_tf_ext_em_qcaps *
+ ************************/
+
+
+/* hwrm_tf_ext_em_qcaps_input (size:192b/24B) */
+struct hwrm_tf_ext_em_qcaps_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR \
+		UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_RX \
+		UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX \
+		UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX
+	/* When set to 1, all offloaded flows will be sent to EXT EM. */
+	#define HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x2)
+	/* unused. */
+	uint32_t	unused0;
+} __attribute__((packed));
+
+/* hwrm_tf_ext_em_qcaps_output (size:320b/40B) */
+struct hwrm_tf_ext_em_qcaps_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/*
+	 * When set to 1, indicates that the FW supports the Centralized
+	 * Memory Model. The concept designates one entity for the
+	 * memory allocation while all others 'subscribe' to it.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_FLAGS_CENTRALIZED_MEMORY_MODEL_SUPPORTED \
+		UINT32_C(0x1)
+	/*
+	 * When set to 1, indicates that the FW supports the Detached
+	 * Centralized Memory Model. The memory is allocated and managed
+	 * as a separate entity. All PFs and VFs will be granted direct
+	 * or semi-direct access to the allocated memory while none of
+	 * which can interfere with the management of the memory.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_FLAGS_DETACHED_CENTRALIZED_MEMORY_MODEL_SUPPORTED \
+		UINT32_C(0x2)
+	/* unused. */
+	uint32_t	unused0;
+	/* Support flags. */
+	uint32_t	supported;
+	/*
+	 * If set to 1, then EXT EM KEY0 table is supported using
+	 * crc32 hash.
+	 * If set to 0, EXT EM KEY0 table is not supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_KEY0_TABLE \
+		UINT32_C(0x1)
+	/*
+	 * If set to 1, then EXT EM KEY1 table is supported using
+	 * lookup3 hash.
+	 * If set to 0, EXT EM KEY1 table is not supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_KEY1_TABLE \
+		UINT32_C(0x2)
+	/*
+	 * If set to 1, then EXT EM External Record table is supported.
+	 * If set to 0, EXT EM External Record table is not
+	 * supported.  (This table includes action record, EFC
+	 * pointers, encap pointers)
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_EXTERNAL_RECORD_TABLE \
+		UINT32_C(0x4)
+	/*
+	 * If set to 1, then EXT EM External Flow Counters table is
+	 * supported.
+	 * If set to 0, EXT EM External Flow Counters table is not
+	 * supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_EXTERNAL_FLOW_COUNTERS_TABLE \
+		UINT32_C(0x8)
+	/*
+	 * If set to 1, then FID table used for implicit flow flush
+	 * is supported.
+	 * If set to 0, then FID table used for implicit flow flush
+	 * is not supported.
+	 */
+	#define HWRM_TF_EXT_EM_QCAPS_OUTPUT_SUPPORTED_FID_TABLE \
+		UINT32_C(0x10)
+	/*
+	 * The maximum number of entries supported by EXT EM. When
+	 * configuring the host memory, the supported numbers of
+	 * entries are:
+	 *      32k, 64k, 128k, 256k, 512k, 1M, 2M, 4M, 8M, 32M, 64M,
+	 *      128M entries.
+	 * For any other value, the FW will round down to the closest
+	 * supported number of entries (e.g. a request for 100k
+	 * entries is configured as 64k).
+	 */
+	uint32_t	max_entries_supported;
+	/*
+	 * The entry size in bytes of each entry in the EXT EM
+	 * KEY0/KEY1 tables.
+	 */
+	uint16_t	key_entry_size;
+	/*
+	 * The entry size in bytes of each entry in the EXT EM RECORD
+	 * tables.
+	 */
+	uint16_t	record_entry_size;
+	/* The entry size in bytes of each entry in the EXT EM EFC tables. */
+	uint16_t	efc_entry_size;
+	/* The FID size in bytes of each entry in the EXT EM FID tables. */
+	uint16_t	fid_entry_size;
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/*********************
+ * hwrm_tf_ext_em_op *
+ *********************/
+
+
+/* hwrm_tf_ext_em_op_input (size:192b/24B) */
+struct hwrm_tf_ext_em_op_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint16_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_OP_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint16_t	unused0;
+	/* The EXT EM operation to be performed. */
+	uint16_t	op;
+	/* This value is reserved and should not be used. */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_RESERVED       UINT32_C(0x0)
+	/*
+	 * To properly stop EXT EM and ensure there are no DMAs,
+	 * the caller must disable EXT EM for the given PF, using
+	 * this call. This will safely disable EXT EM and ensure
+	 * that all DMAs to the keys/records/efc have been
+	 * completed.
+	 */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE UINT32_C(0x1)
+	/*
+	 * Once the EXT EM host memory and EXT EM options have been
+	 * configured, the caller should enable EXT EM for the given
+	 * PF. Note that once this call has been made, the EXT EM
+	 * mechanism will be active and DMAs will occur as packets
+	 * are processed.
+	 */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE  UINT32_C(0x2)
+	/*
+	 * Clear EXT EM settings for the given PF so that the
+	 * register values are reset back to their initial state.
+	 */
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_CLEANUP UINT32_C(0x3)
+	#define HWRM_TF_EXT_EM_OP_INPUT_OP_LAST \
+		HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_CLEANUP
+	/* unused. */
+	uint16_t	unused1;
+} __attribute__((packed));
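+/*
+ * Editor's sketch (illustrative, not part of the generated interface):
+ * the op codes above imply an ordering. A typical sequence is to
+ * register context memory (hwrm_tf_ctxt_mem_rgtr), configure it
+ * (hwrm_tf_ext_em_cfg), then enable, and to disable again before
+ * tearing the memory down:
+ *
+ *   req.op = rte_cpu_to_le_16(HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
+ *   ... packets are processed, DMAs occur ...
+ *   req.op = rte_cpu_to_le_16(HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+ */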
+
+/* hwrm_tf_ext_em_op_output (size:128b/16B) */
+struct hwrm_tf_ext_em_op_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/**********************
+ * hwrm_tf_ext_em_cfg *
+ **********************/
+
+
+/* hwrm_tf_ext_em_cfg_input (size:384b/48B) */
+struct hwrm_tf_ext_em_cfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR \
+		UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX \
+		UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX \
+		UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX
+	/* When set to 1, all offloaded flows will be sent to EXT EM. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x2)
+	/* When set to 1, indicates a secondary PF; when set to 0, a primary PF. */
+	#define HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_SECONDARY_PF \
+		UINT32_C(0x4)
+	/*
+	 * Group_id used by the firmware to identify memory pools belonging
+	 * to a certain group.
+	 */
+	uint16_t	group_id;
+	/*
+	 * Dynamically reconfigure the EEM pending cache every 1/10th of a
+	 * second. If set to 0, the EEM HW flush of the pending cache is
+	 * disabled.
+	 */
+	uint8_t	flush_interval;
+	/* unused. */
+	uint8_t	unused0;
+	/*
+	 * Configures EXT EM with the given number of entries. The
+	 * EXT EM tables KEY0, KEY1, RECORD and EFC all have the
+	 * same number of entries and all tables will be configured
+	 * using this value. The current minimum value is 32k and
+	 * the current maximum value is 128M.
+	 */
+	uint32_t	num_entries;
+	/* unused. */
+	uint32_t	unused1;
+	/* Configure EXT EM with the given context id for the KEY0 table. */
+	uint16_t	key0_ctx_id;
+	/* Configure EXT EM with the given context id for the KEY1 table. */
+	uint16_t	key1_ctx_id;
+	/* Configure EXT EM with the given context id for the RECORD table. */
+	uint16_t	record_ctx_id;
+	/* Configure EXT EM with the given context id for the EFC table. */
+	uint16_t	efc_ctx_id;
+	/* Configure EXT EM with the given context id for the FID table. */
+	uint16_t	fid_ctx_id;
+	/* unused. */
+	uint16_t	unused2;
+	/* unused. */
+	uint32_t	unused3;
+} __attribute__((packed));
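+/*
+ * Editor's sketch (illustrative, not part of the generated interface):
+ * the *_ctx_id fields carry the ctx_id handles returned by earlier
+ * hwrm_tf_ctxt_mem_rgtr calls, one registration per table. The local
+ * variable names below are placeholders:
+ *
+ *   cfg.num_entries   = rte_cpu_to_le_32(num_entries);
+ *   cfg.key0_ctx_id   = rte_cpu_to_le_16(key0_ctx_id);
+ *   cfg.key1_ctx_id   = rte_cpu_to_le_16(key1_ctx_id);
+ *   cfg.record_ctx_id = rte_cpu_to_le_16(record_ctx_id);
+ *   cfg.efc_ctx_id    = rte_cpu_to_le_16(efc_ctx_id);
+ *   cfg.fid_ctx_id    = rte_cpu_to_le_16(fid_ctx_id);
+ */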
+
+/* hwrm_tf_ext_em_cfg_output (size:128b/16B) */
+struct hwrm_tf_ext_em_cfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/***********************
+ * hwrm_tf_ext_em_qcfg *
+ ***********************/
+
+
+/* hwrm_tf_ext_em_qcfg_input (size:192b/24B) */
+struct hwrm_tf_ext_em_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_QCFG_INPUT_FLAGS_DIR_TX
+	/* unused. */
+	uint32_t	unused0;
+} __attribute__((packed));
+
+/* hwrm_tf_ext_em_qcfg_output (size:256b/32B) */
+struct hwrm_tf_ext_em_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR \
+		UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_RX \
+		UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_TX \
+		UINT32_C(0x1)
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_LAST \
+		HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_DIR_TX
+	/* When set to 1, all offloaded flows will be sent to EXT EM. */
+	#define HWRM_TF_EXT_EM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD \
+		UINT32_C(0x2)
+	/* The number of entries the FW has configured for EXT EM. */
+	uint32_t	num_entries;
+	/* The context id configured for the KEY0 table. */
+	uint16_t	key0_ctx_id;
+	/* The context id configured for the KEY1 table. */
+	uint16_t	key1_ctx_id;
+	/* The context id configured for the RECORD table. */
+	uint16_t	record_ctx_id;
+	/* The context id configured for the EFC table. */
+	uint16_t	efc_ctx_id;
+	/* The context id configured for the FID table. */
+	uint16_t	fid_ctx_id;
+	/* unused. */
+	uint8_t	unused0[5];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __attribute__((packed));
+
+/********************
+ * hwrm_tf_tcam_set *
+ ********************/
+
+
+/* hwrm_tf_tcam_set_input (size:1024b/128B) */
+struct hwrm_tf_tcam_set_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer to a host buffer into which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX
+	/*
+	 * Indicates that device data is being sent via DMA; the
+	 * device data packing does not change.
+	 */
+	#define HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA     UINT32_C(0x2)
+	/*
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Index of TCAM entry. */
+	uint16_t	idx;
+	/* Number of bytes in the TCAM key. */
+	uint8_t	key_size;
+	/* Number of bytes in the TCAM result. */
+	uint8_t	result_size;
+	/*
+	 * Offset from which the mask bytes start in the device data
+	 * array; the key offset is always 0.
+	 */
+	uint8_t	mask_offset;
+	/* Offset from which the result bytes start in the device data array. */
+	uint8_t	result_offset;
+	/* unused. */
+	uint8_t	unused0[6];
+	/*
+	 * TCAM key located at offset 0, mask located at mask_offset
+	 * and result at result_offset for the device.
+	 */
+	uint8_t	dev_data[88];
 } __attribute__((packed));
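+/*
+ * Editor's sketch (illustrative, not part of the generated interface):
+ * dev_data is a packed byte array; the sender places the key at
+ * offset 0 and chooses mask_offset/result_offset to locate the other
+ * two parts. For an 8-byte key/mask and a 16-byte result (sizes
+ * hypothetical):
+ *
+ *   req.key_size      = 8;
+ *   req.mask_offset   = 8;
+ *   req.result_size   = 16;
+ *   req.result_offset = 16;
+ *   memcpy(&req.dev_data[0],  key,    8);
+ *   memcpy(&req.dev_data[8],  mask,   8);
+ *   memcpy(&req.dev_data[16], result, 16);
+ */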
 
-/* hwrm_cfa_eem_qcfg_output (size:256b/32B) */
-struct hwrm_cfa_eem_qcfg_output {
+/* hwrm_tf_tcam_set_output (size:128b/16B) */
+struct hwrm_tf_tcam_set_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -31848,46 +35014,26 @@ struct hwrm_cfa_eem_qcfg_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	uint32_t	flags;
-	/* When set to 1, indicates the configuration is the TX flow. */
-	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_TX \
-		UINT32_C(0x1)
-	/* When set to 1, indicates the configuration is the RX flow. */
-	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PATH_RX \
-		UINT32_C(0x2)
-	/* When set to 1, all offloaded flows will be sent to EEM. */
-	#define HWRM_CFA_EEM_QCFG_OUTPUT_FLAGS_PREFERRED_OFFLOAD \
-		UINT32_C(0x4)
-	/* The number of entries the FW has configured for EEM. */
-	uint32_t	num_entries;
-	/* Configured EEM with the given context if for KEY0 table. */
-	uint16_t	key0_ctx_id;
-	/* Configured EEM with the given context if for KEY1 table. */
-	uint16_t	key1_ctx_id;
-	/* Configured EEM with the given context if for RECORD table. */
-	uint16_t	record_ctx_id;
-	/* Configured EEM with the given context if for EFC table. */
-	uint16_t	efc_ctx_id;
-	/* Configured EEM with the given context if for EFC table. */
-	uint16_t	fid_ctx_id;
-	uint8_t	unused_2[5];
+	/* unused. */
+	uint8_t	unused0[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
 
-/*******************
- * hwrm_cfa_eem_op *
- *******************/
+/********************
+ * hwrm_tf_tcam_get *
+ ********************/
 
 
-/* hwrm_cfa_eem_op_input (size:192b/24B) */
-struct hwrm_cfa_eem_op_input {
+/* hwrm_tf_tcam_get_input (size:256b/32B) */
+struct hwrm_tf_tcam_get_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -31916,49 +35062,31 @@ struct hwrm_cfa_eem_op_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
 	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_GET_INPUT_FLAGS_DIR_TX
 	/*
-	 * When set to 1, indicates the host memory which is passed will be
-	 * used for the TX flow offload function specified in fid.
-	 * Note if this bit is set then the path_rx bit can't be set.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_TX     UINT32_C(0x1)
-	/*
-	 * When set to 1, indicates the host memory which is passed will be
-	 * used for the RX flow offload function specified in fid.
-	 * Note if this bit is set then the path_tx bit can't be set.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_FLAGS_PATH_RX     UINT32_C(0x2)
-	uint16_t	unused_0;
-	/* The number of EEM key table entries to be configured. */
-	uint16_t	op;
-	/* This value is reserved and should not be used. */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_RESERVED    UINT32_C(0x0)
-	/*
-	 * To properly stop EEM and ensure there are no DMA's, the caller
-	 * must disable EEM for the given PF, using this call. This will
-	 * safely disable EEM and ensure that all DMA'ed to the
-	 * keys/records/efc have been completed.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_DISABLE UINT32_C(0x1)
-	/*
-	 * Once the EEM host memory has been configured, EEM options have
-	 * been configured. Then the caller should enable EEM for the given
-	 * PF. Note once this call has been made, then the EEM mechanism
-	 * will be active and DMA's will occur as packets are processed.
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
 	 */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_ENABLE  UINT32_C(0x2)
-	/*
-	 * Clear EEM settings for the given PF so that the register values
-	 * are reset back to there initial state.
-	 */
-	#define HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP UINT32_C(0x3)
-	#define HWRM_CFA_EEM_OP_INPUT_OP_LAST \
-		HWRM_CFA_EEM_OP_INPUT_OP_EEM_CLEANUP
+	uint32_t	type;
+	/* Index of a TCAM entry. */
+	uint16_t	idx;
+	/* unused. */
+	uint16_t	unused0;
 } __attribute__((packed));
 
-/* hwrm_cfa_eem_op_output (size:128b/16B) */
-struct hwrm_cfa_eem_op_output {
+/* hwrm_tf_tcam_get_output (size:2368b/296B) */
+struct hwrm_tf_tcam_get_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -31967,24 +35095,41 @@ struct hwrm_cfa_eem_op_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	uint8_t	unused_0[7];
+	/* Number of bytes in the TCAM key. */
+	uint8_t	key_size;
+	/* Number of bytes in the TCAM result. */
+	uint8_t	result_size;
+	/* Offset from which the mask bytes start in the device data array. */
+	uint8_t	mask_offset;
+	/* Offset from which the result bytes start in the device data array. */
+	uint8_t	result_offset;
+	/* unused. */
+	uint8_t	unused0[4];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * TCAM key located at offset 0, mask located at mask_offset
+	 * and result at result_offset for the device.
+	 */
+	uint8_t	dev_data[272];
+	/* unused. */
+	uint8_t	unused1[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
 
-/********************************
- * hwrm_cfa_adv_flow_mgnt_qcaps *
- ********************************/
+/*********************
+ * hwrm_tf_tcam_move *
+ *********************/
 
 
-/* hwrm_cfa_adv_flow_mgnt_qcaps_input (size:256b/32B) */
-struct hwrm_cfa_adv_flow_mgnt_qcaps_input {
+/* hwrm_tf_tcam_move_input (size:1024b/128B) */
+struct hwrm_tf_tcam_move_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -32013,11 +35158,33 @@ struct hwrm_cfa_adv_flow_mgnt_qcaps_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-	uint32_t	unused_0[4];
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_MOVE_INPUT_FLAGS_DIR_TX
+	/*
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Number of TCAM index pairs to be swapped for the device. */
+	uint16_t	count;
+	/* unused. */
+	uint16_t	unused0;
+	/* TCAM index pairs to be swapped for the device. */
+	uint16_t	idx_pairs[48];
 } __attribute__((packed));
 
-/* hwrm_cfa_adv_flow_mgnt_qcaps_output (size:128b/16B) */
-struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
+/* hwrm_tf_tcam_move_output (size:128b/16B) */
+struct hwrm_tf_tcam_move_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -32026,131 +35193,26 @@ struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	uint32_t	flags;
-	/*
-	 * Value of 1 to indicate firmware support 16-bit flow handle.
-	 * Value of 0 to indicate firmware not support 16-bit flow handle.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_16BIT_SUPPORTED \
-		UINT32_C(0x1)
-	/*
-	 * Value of 1 to indicate firmware support 64-bit flow handle.
-	 * Value of 0 to indicate firmware not support 64-bit flow handle.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_HND_64BIT_SUPPORTED \
-		UINT32_C(0x2)
-	/*
-	 * Value of 1 to indicate firmware support flow batch delete operation through
-	 * HWRM_CFA_FLOW_FLUSH command.
-	 * Value of 0 to indicate that the firmware does not support flow batch delete
-	 * operation.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_BATCH_DELETE_SUPPORTED \
-		UINT32_C(0x4)
-	/*
-	 * Value of 1 to indicate that the firmware support flow reset all operation through
-	 * HWRM_CFA_FLOW_FLUSH command.
-	 * Value of 0 indicates firmware does not support flow reset all operation.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_RESET_ALL_SUPPORTED \
-		UINT32_C(0x8)
-	/*
-	 * Value of 1 to indicate that firmware supports use of FID as dest_id in
-	 * HWRM_CFA_NTUPLE_ALLOC/CFG commands.
-	 * Value of 0 indicates firmware does not support use of FID as dest_id.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_DEST_FUNC_SUPPORTED \
-		UINT32_C(0x10)
-	/*
-	 * Value of 1 to indicate that firmware supports TX EEM flows.
-	 * Value of 0 indicates firmware does not support TX EEM flows.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_TX_EEM_FLOW_SUPPORTED \
-		UINT32_C(0x20)
-	/*
-	 * Value of 1 to indicate that firmware supports RX EEM flows.
-	 * Value of 0 indicates firmware does not support RX EEM flows.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RX_EEM_FLOW_SUPPORTED \
-		UINT32_C(0x40)
-	/*
-	 * Value of 1 to indicate that firmware supports the dynamic allocation of an
-	 * on-chip flow counter which can be used for EEM flows.
-	 * Value of 0 indicates firmware does not support the dynamic allocation of an
-	 * on-chip flow counter.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_FLOW_COUNTER_ALLOC_SUPPORTED \
-		UINT32_C(0x80)
-	/*
-	 * Value of 1 to indicate that firmware supports setting of
-	 * rfs_ring_tbl_idx in HWRM_CFA_NTUPLE_ALLOC command.
-	 * Value of 0 indicates firmware does not support rfs_ring_tbl_idx.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_SUPPORTED \
-		UINT32_C(0x100)
-	/*
-	 * Value of 1 to indicate that firmware supports untagged matching
-	 * criteria on HWRM_CFA_L2_FILTER_ALLOC command. Value of 0
-	 * indicates firmware does not support untagged matching.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_UNTAGGED_VLAN_SUPPORTED \
-		UINT32_C(0x200)
-	/*
-	 * Value of 1 to indicate that firmware supports XDP filter. Value
-	 * of 0 indicates firmware does not support XDP filter.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_XDP_SUPPORTED \
-		UINT32_C(0x400)
-	/*
-	 * Value of 1 to indicate that the firmware support L2 header source
-	 * fields matching criteria on HWRM_CFA_L2_FILTER_ALLOC command.
-	 * Value of 0 indicates firmware does not support L2 header source
-	 * fields matching.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED \
-		UINT32_C(0x800)
-	/*
-	 * If set to 1, firmware is capable of supporting ARP ethertype as
-	 * matching criteria for HWRM_CFA_NTUPLE_FILTER_ALLOC command on the
-	 * RX direction. By default, this flag should be 0 for older version
-	 * of firmware.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ARP_SUPPORTED \
-		UINT32_C(0x1000)
-	/*
-	 * Value of 1 to indicate that firmware supports setting of
-	 * rfs_ring_tbl_idx in dst_id field of the HWRM_CFA_NTUPLE_ALLOC
-	 * command. Value of 0 indicates firmware does not support
-	 * rfs_ring_tbl_idx in dst_id field.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_V2_SUPPORTED \
-		UINT32_C(0x2000)
-	/*
-	 * If set to 1, firmware is capable of supporting IPv4/IPv6 as
-	 * ethertype in HWRM_CFA_NTUPLE_FILTER_ALLOC command on the RX
-	 * direction. By default, this flag should be 0 for older version
-	 * of firmware.
-	 */
-	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_ETHERTYPE_IP_SUPPORTED \
-		UINT32_C(0x4000)
-	uint8_t	unused_0[3];
+	/* unused. */
+	uint8_t	unused0[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
 
-/******************
- * hwrm_cfa_tflib *
- ******************/
+/*********************
+ * hwrm_tf_tcam_free *
+ *********************/
 
 
-/* hwrm_cfa_tflib_input (size:1024b/128B) */
-struct hwrm_cfa_tflib_input {
+/* hwrm_tf_tcam_free_input (size:1024b/128B) */
+struct hwrm_tf_tcam_free_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
 	/*
@@ -32179,18 +35241,33 @@ struct hwrm_cfa_tflib_input {
 	 * point to a physically contiguous block of memory.
 	 */
 	uint64_t	resp_addr;
-	/* TFLIB message type. */
-	uint16_t	tf_type;
-	/* TFLIB message subtype. */
-	uint16_t	tf_subtype;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an RX flow. */
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a TX flow. */
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX
+	/*
+	 * TCAM type of the resource, defined globally in the
+	 * hwrm_tf_resc_type enum.
+	 */
+	uint32_t	type;
+	/* Number of TCAM indices to be deleted for the device. */
+	uint16_t	count;
 	/* unused. */
-	uint8_t	unused0[4];
-	/* TFLIB request data. */
-	uint32_t	tf_req[26];
+	uint16_t	unused0;
+	/* TCAM index list to be deleted for the device. */
+	uint16_t	idx_list[48];
 } __attribute__((packed));
 
-/* hwrm_cfa_tflib_output (size:5632b/704B) */
-struct hwrm_cfa_tflib_output {
+/* hwrm_tf_tcam_free_output (size:128b/16B) */
+struct hwrm_tf_tcam_free_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
 	/* The HWRM command request type. */
@@ -32199,22 +35276,15 @@ struct hwrm_cfa_tflib_output {
 	uint16_t	seq_id;
 	/* The length of the response data in number of bytes. */
 	uint16_t	resp_len;
-	/* TFLIB message type. */
-	uint16_t	tf_type;
-	/* TFLIB message subtype. */
-	uint16_t	tf_subtype;
-	/* TFLIB response code */
-	uint32_t	tf_resp_code;
-	/* TFLIB response data. */
-	uint32_t	tf_resp[170];
 	/* unused. */
-	uint8_t	unused1[7];
+	uint8_t	unused0[7];
 	/*
-	 * This field is used in Output records to indicate that the output
-	 * is completely written to RAM. This field should be read as '1'
-	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
 	 */
 	uint8_t	valid;
 } __attribute__((packed));
@@ -33155,9 +36225,9 @@ struct pcie_ctx_hw_stats {
 	uint64_t	pcie_tl_signal_integrity;
 	/* Number of times LTSSM entered Recovery state */
 	uint64_t	pcie_link_integrity;
-	/* Number of TLP bytes that have been transmitted */
+	/* Reports the TLP transmit traffic rate, in Mbps. */
 	uint64_t	pcie_tx_traffic_rate;
-	/* Number of TLP bytes that have been received */
+	/* Reports the TLP receive traffic rate, in Mbps. */
 	uint64_t	pcie_rx_traffic_rate;
 	/* Number of DLLP bytes that have been transmitted */
 	uint64_t	pcie_tx_dllp_statistics;
@@ -33981,7 +37051,23 @@ struct hwrm_nvm_modify_input {
 	uint64_t	host_src_addr;
 	/* 16-bit directory entry index. */
 	uint16_t	dir_idx;
-	uint8_t	unused_0[2];
+	uint16_t	flags;
+	/*
+	 * This flag indicates the sender wants to modify a contiguous NVRAM
+	 * area using a batch of these HWRM requests. The offset of each
+	 * request must be contiguous with the end of the previous request.
+	 * Firmware does not update the directory entry until it receives
+	 * the last request, which is indicated by the batch_last flag.
+	 * This flag is usually set when the sender does not have a block of
+	 * memory big enough to hold the entire NVRAM data to send in
+	 * one request.
+	 */
+	#define HWRM_NVM_MODIFY_INPUT_FLAGS_BATCH_MODE     UINT32_C(0x1)
+	/*
+	 * This flag can be used only when the batch_mode flag is set.
+	 * It indicates that this request is the last request of the batch.
+	 */
+	#define HWRM_NVM_MODIFY_INPUT_FLAGS_BATCH_LAST     UINT32_C(0x2)
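+	/*
+	 * Editor's note (illustrative): modifying a 192KB image in
+	 * 64KB chunks would take three requests with batch_mode set
+	 * and offsets 0x0, 0x10000 and 0x20000, with batch_last set
+	 * only on the third request.
+	 */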
 	/* 32-bit NVRAM byte-offset to modify content from. */
 	uint32_t	offset;
 	/*
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 02/34] net/bnxt: update hwrm prep to use ptr
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 01/34] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 03/34] net/bnxt: add truflow message handlers Venkat Duvvuru
                         ` (34 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Randy Schacher

From: Randy Schacher <stuart.schacher@broadcom.com>

- Change HWRM_PREP to use pointer and use the full
  HWRM enum
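
A minimal before/after sketch of one call site (the pattern is taken
from the diff below):

    /* before: request passed by value, short enum suffix */
    HWRM_PREP(req, CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);

    /* after: request passed by pointer, full HWRM_* enumerator */
    HWRM_PREP(&req, HWRM_CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);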

Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |   2 +-
 drivers/net/bnxt/bnxt_hwrm.c | 202 ++++++++++++++++++++++---------------------
 2 files changed, 103 insertions(+), 101 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 3ae08a2..b795ed6 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -594,7 +594,7 @@ struct bnxt {
 
 	uint8_t			mac_addr[RTE_ETHER_ADDR_LEN];
 
-	uint16_t			hwrm_cmd_seq;
+	uint16_t			chimp_cmd_seq;
 	uint16_t			kong_cmd_seq;
 	void				*hwrm_cmd_resp_addr;
 	rte_iova_t			hwrm_cmd_resp_dma_addr;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index a9c9c72..93b2ea7 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -182,19 +182,19 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
  *
  * HWRM_UNLOCK() must be called after all response processing is completed.
  */
-#define HWRM_PREP(req, type, kong) do { \
+#define HWRM_PREP(req, type, kong) do {	\
 	rte_spinlock_lock(&bp->hwrm_lock); \
 	if (bp->hwrm_cmd_resp_addr == NULL) { \
 		rte_spinlock_unlock(&bp->hwrm_lock); \
 		return -EACCES; \
 	} \
 	memset(bp->hwrm_cmd_resp_addr, 0, bp->max_resp_len); \
-	req.req_type = rte_cpu_to_le_16(HWRM_##type); \
-	req.cmpl_ring = rte_cpu_to_le_16(-1); \
-	req.seq_id = kong ? rte_cpu_to_le_16(bp->kong_cmd_seq++) :\
-		rte_cpu_to_le_16(bp->hwrm_cmd_seq++); \
-	req.target_id = rte_cpu_to_le_16(0xffff); \
-	req.resp_addr = rte_cpu_to_le_64(bp->hwrm_cmd_resp_dma_addr); \
+	(req)->req_type = rte_cpu_to_le_16(type); \
+	(req)->cmpl_ring = rte_cpu_to_le_16(-1); \
+	(req)->seq_id = kong ? rte_cpu_to_le_16(bp->kong_cmd_seq++) :\
+		rte_cpu_to_le_16(bp->chimp_cmd_seq++); \
+	(req)->target_id = rte_cpu_to_le_16(0xffff); \
+	(req)->resp_addr = rte_cpu_to_le_64(bp->hwrm_cmd_resp_dma_addr); \
 } while (0)
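+/*
+ * Editor's note (not part of the patch): taking the request by
+ * pointer lets HWRM_PREP accept any hwrm_*_input lvalue, and using
+ * the full HWRM_* enumerator rather than pasting HWRM_##type makes
+ * call sites greppable for the exact command name.
+ */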
 
 #define HWRM_CHECK_RESULT_SILENT() do {\
@@ -263,7 +263,7 @@ int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	struct hwrm_cfa_l2_set_rx_mask_input req = {.req_type = 0 };
 	struct hwrm_cfa_l2_set_rx_mask_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 	req.mask = 0;
 
@@ -288,7 +288,7 @@ int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp,
 	if (vnic->fw_vnic_id == INVALID_HW_RING_ID)
 		return rc;
 
-	HWRM_PREP(req, CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_SET_RX_MASK, BNXT_USE_CHIMP_MB);
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
 	if (vnic->flags & BNXT_VNIC_INFO_BCAST)
@@ -347,7 +347,7 @@ int bnxt_hwrm_cfa_vlan_antispoof_cfg(struct bnxt *bp, uint16_t fid,
 				return 0;
 		}
 	}
-	HWRM_PREP(req, CFA_VLAN_ANTISPOOF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_VLAN_ANTISPOOF_CFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(fid);
 
 	req.vlan_tag_mask_tbl_addr =
@@ -389,7 +389,7 @@ int bnxt_hwrm_clear_l2_filter(struct bnxt *bp,
 	if (l2_filter->l2_ref_cnt > 0)
 		return 0;
 
-	HWRM_PREP(req, CFA_L2_FILTER_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_FILTER_FREE, BNXT_USE_CHIMP_MB);
 
 	req.l2_filter_id = rte_cpu_to_le_64(filter->fw_l2_filter_id);
 
@@ -440,7 +440,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 	if (filter->fw_l2_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_l2_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_L2_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_L2_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -503,7 +503,7 @@ int bnxt_hwrm_ptp_cfg(struct bnxt *bp)
 	if (!ptp)
 		return 0;
 
-	HWRM_PREP(req, PORT_MAC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_MAC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (ptp->rx_filter)
 		flags |= HWRM_PORT_MAC_CFG_INPUT_FLAGS_PTP_RX_TS_CAPTURE_ENABLE;
@@ -536,7 +536,7 @@ static int bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
 	if (ptp)
 		return 0;
 
-	HWRM_PREP(req, PORT_MAC_PTP_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_MAC_PTP_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(bp->pf.port_id);
 
@@ -591,7 +591,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	uint32_t flags;
 	int i;
 
-	HWRM_PREP(req, FUNC_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCAPS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 
@@ -721,7 +721,7 @@ int bnxt_hwrm_vnic_qcaps(struct bnxt *bp)
 	struct hwrm_vnic_qcaps_input req = {.req_type = 0 };
 	struct hwrm_vnic_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_QCAPS, BNXT_USE_CHIMP_MB);
 
 	req.target_id = rte_cpu_to_le_16(0xffff);
 
@@ -748,7 +748,7 @@ int bnxt_hwrm_func_reset(struct bnxt *bp)
 	struct hwrm_func_reset_input req = {.req_type = 0 };
 	struct hwrm_func_reset_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_RESET, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_RESET, BNXT_USE_CHIMP_MB);
 
 	req.enables = rte_cpu_to_le_32(0);
 
@@ -781,7 +781,7 @@ int bnxt_hwrm_func_driver_register(struct bnxt *bp)
 	if ((BNXT_PF(bp) || BNXT_VF_IS_TRUSTED(bp)) && !BNXT_STINGRAY(bp))
 		flags |= HWRM_FUNC_DRV_RGTR_INPUT_FLAGS_MASTER_SUPPORT;
 
-	HWRM_PREP(req, FUNC_DRV_RGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_DRV_RGTR, BNXT_USE_CHIMP_MB);
 	req.enables = rte_cpu_to_le_32(HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_VER |
 			HWRM_FUNC_DRV_RGTR_INPUT_ENABLES_ASYNC_EVENT_FWD);
 	req.ver_maj = RTE_VER_YEAR;
@@ -853,7 +853,7 @@ int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp, bool test)
 	struct hwrm_func_vf_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_func_vf_cfg_input req = {0};
 
-	HWRM_PREP(req, FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
 
 	enables = HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_RX_RINGS  |
 		  HWRM_FUNC_VF_CFG_INPUT_ENABLES_NUM_TX_RINGS   |
@@ -919,7 +919,7 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp)
 	struct hwrm_func_resource_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_func_resource_qcaps_input req = {0};
 
-	HWRM_PREP(req, FUNC_RESOURCE_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_RESOURCE_QCAPS, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -964,7 +964,7 @@ int bnxt_hwrm_ver_get(struct bnxt *bp, uint32_t timeout)
 
 	bp->max_req_len = HWRM_MAX_REQ_LEN;
 	bp->hwrm_cmd_timeout = timeout;
-	HWRM_PREP(req, VER_GET, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VER_GET, BNXT_USE_CHIMP_MB);
 
 	req.hwrm_intf_maj = HWRM_VERSION_MAJOR;
 	req.hwrm_intf_min = HWRM_VERSION_MINOR;
@@ -1104,7 +1104,7 @@ int bnxt_hwrm_func_driver_unregister(struct bnxt *bp, uint32_t flags)
 	if (!(bp->flags & BNXT_FLAG_REGISTERED))
 		return 0;
 
-	HWRM_PREP(req, FUNC_DRV_UNRGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_DRV_UNRGTR, BNXT_USE_CHIMP_MB);
 	req.flags = flags;
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -1122,7 +1122,7 @@ static int bnxt_hwrm_port_phy_cfg(struct bnxt *bp, struct bnxt_link_info *conf)
 	struct hwrm_port_phy_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	uint32_t enables = 0;
 
-	HWRM_PREP(req, PORT_PHY_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_PHY_CFG, BNXT_USE_CHIMP_MB);
 
 	if (conf->link_up) {
 		/* Setting Fixed Speed. But AutoNeg is ON, So disable it */
@@ -1186,7 +1186,7 @@ static int bnxt_hwrm_port_phy_qcfg(struct bnxt *bp,
 	struct hwrm_port_phy_qcfg_input req = {0};
 	struct hwrm_port_phy_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, PORT_PHY_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_PHY_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -1265,7 +1265,7 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 	int i;
 
 get_rx_info:
-	HWRM_PREP(req, QUEUE_QPORTCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_QUEUE_QPORTCFG, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(dir);
 	/* HWRM Version >= 1.9.1 only if COS Classification is not required. */
@@ -1353,7 +1353,7 @@ int bnxt_hwrm_ring_alloc(struct bnxt *bp,
 	struct rte_mempool *mb_pool;
 	uint16_t rx_buf_size;
 
-	HWRM_PREP(req, RING_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.page_tbl_addr = rte_cpu_to_le_64(ring->bd_dma);
 	req.fbo = rte_cpu_to_le_32(0);
@@ -1477,7 +1477,7 @@ int bnxt_hwrm_ring_free(struct bnxt *bp,
 	struct hwrm_ring_free_input req = {.req_type = 0 };
 	struct hwrm_ring_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_FREE, BNXT_USE_CHIMP_MB);
 
 	req.ring_type = ring_type;
 	req.ring_id = rte_cpu_to_le_16(ring->fw_ring_id);
@@ -1525,7 +1525,7 @@ int bnxt_hwrm_ring_grp_alloc(struct bnxt *bp, unsigned int idx)
 	struct hwrm_ring_grp_alloc_input req = {.req_type = 0 };
 	struct hwrm_ring_grp_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_GRP_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_GRP_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.cr = rte_cpu_to_le_16(bp->grp_info[idx].cp_fw_ring_id);
 	req.rr = rte_cpu_to_le_16(bp->grp_info[idx].rx_fw_ring_id);
@@ -1549,7 +1549,7 @@ int bnxt_hwrm_ring_grp_free(struct bnxt *bp, unsigned int idx)
 	struct hwrm_ring_grp_free_input req = {.req_type = 0 };
 	struct hwrm_ring_grp_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, RING_GRP_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_GRP_FREE, BNXT_USE_CHIMP_MB);
 
 	req.ring_group_id = rte_cpu_to_le_16(bp->grp_info[idx].fw_grp_id);
 
@@ -1571,7 +1571,7 @@ int bnxt_hwrm_stat_clear(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
 	if (cpr->hw_stats_ctx_id == (uint32_t)HWRM_NA_SIGNATURE)
 		return rc;
 
-	HWRM_PREP(req, STAT_CTX_CLR_STATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_CLR_STATS, BNXT_USE_CHIMP_MB);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cpr->hw_stats_ctx_id);
 
@@ -1590,7 +1590,7 @@ int bnxt_hwrm_stat_ctx_alloc(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	struct hwrm_stat_ctx_alloc_input req = {.req_type = 0 };
 	struct hwrm_stat_ctx_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.update_period_ms = rte_cpu_to_le_32(0);
 
@@ -1614,7 +1614,7 @@ int bnxt_hwrm_stat_ctx_free(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	struct hwrm_stat_ctx_free_input req = {.req_type = 0 };
 	struct hwrm_stat_ctx_free_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_FREE, BNXT_USE_CHIMP_MB);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cpr->hw_stats_ctx_id);
 
@@ -1648,7 +1648,7 @@ int bnxt_hwrm_vnic_alloc(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 
 skip_ring_grps:
 	vnic->mru = BNXT_VNIC_MRU(bp->eth_dev->data->mtu);
-	HWRM_PREP(req, VNIC_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_ALLOC, BNXT_USE_CHIMP_MB);
 
 	if (vnic->func_default)
 		req.flags =
@@ -1671,7 +1671,7 @@ static int bnxt_hwrm_vnic_plcmodes_qcfg(struct bnxt *bp,
 	struct hwrm_vnic_plcmodes_qcfg_input req = {.req_type = 0 };
 	struct hwrm_vnic_plcmodes_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_PLCMODES_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
@@ -1704,7 +1704,7 @@ static int bnxt_hwrm_vnic_plcmodes_cfg(struct bnxt *bp,
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 	req.flags = rte_cpu_to_le_32(pmode->flags);
@@ -1743,7 +1743,7 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	if (rc)
 		return rc;
 
-	HWRM_PREP(req, VNIC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (BNXT_CHIP_THOR(bp)) {
 		int dflt_rxq = vnic->start_grp_id;
@@ -1847,7 +1847,7 @@ int bnxt_hwrm_vnic_qcfg(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 		PMD_DRV_LOG(DEBUG, "VNIC QCFG ID %d\n", vnic->fw_vnic_id);
 		return rc;
 	}
-	HWRM_PREP(req, VNIC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.enables =
 		rte_cpu_to_le_32(HWRM_VNIC_QCFG_INPUT_ENABLES_VF_ID_VALID);
@@ -1890,7 +1890,7 @@ int bnxt_hwrm_vnic_ctx_alloc(struct bnxt *bp,
 	struct hwrm_vnic_rss_cos_lb_ctx_alloc_output *resp =
 						bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_RSS_COS_LB_CTX_ALLOC, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -1919,7 +1919,7 @@ int _bnxt_hwrm_vnic_ctx_free(struct bnxt *bp,
 		PMD_DRV_LOG(DEBUG, "VNIC RSS Rule %x\n", vnic->rss_rule);
 		return rc;
 	}
-	HWRM_PREP(req, VNIC_RSS_COS_LB_CTX_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_RSS_COS_LB_CTX_FREE, BNXT_USE_CHIMP_MB);
 
 	req.rss_cos_lb_ctx_id = rte_cpu_to_le_16(ctx_idx);
 
@@ -1964,7 +1964,7 @@ int bnxt_hwrm_vnic_free(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_FREE, BNXT_USE_CHIMP_MB);
 
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
@@ -1991,7 +1991,7 @@ bnxt_hwrm_vnic_rss_cfg_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	struct hwrm_vnic_rss_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 
 	for (i = 0; i < nr_ctxs; i++) {
-		HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+		HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
 
 		req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 		req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
@@ -2029,7 +2029,7 @@ int bnxt_hwrm_vnic_rss_cfg(struct bnxt *bp,
 	if (BNXT_CHIP_THOR(bp))
 		return bnxt_hwrm_vnic_rss_cfg_thor(bp, vnic);
 
-	HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
 
 	req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
 	req.hash_mode_flags = vnic->hash_mode;
@@ -2062,7 +2062,7 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp,
 		return rc;
 	}
 
-	HWRM_PREP(req, VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(
 			HWRM_VNIC_PLCMODES_CFG_INPUT_FLAGS_JUMBO_PLACEMENT);
@@ -2103,7 +2103,7 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp,
 		return 0;
 	}
 
-	HWRM_PREP(req, VNIC_TPA_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_VNIC_TPA_CFG, BNXT_USE_CHIMP_MB);
 
 	if (enable) {
 		req.enables = rte_cpu_to_le_32(
@@ -2143,7 +2143,7 @@ int bnxt_hwrm_func_vf_mac(struct bnxt *bp, uint16_t vf, const uint8_t *mac_addr)
 	memcpy(req.dflt_mac_addr, mac_addr, sizeof(req.dflt_mac_addr));
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -2161,7 +2161,7 @@ int bnxt_hwrm_func_qstats_tx_drop(struct bnxt *bp, uint16_t fid,
 	struct hwrm_func_qstats_input req = {.req_type = 0};
 	struct hwrm_func_qstats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_QSTATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QSTATS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
@@ -2184,7 +2184,7 @@ int bnxt_hwrm_func_qstats(struct bnxt *bp, uint16_t fid,
 	struct hwrm_func_qstats_input req = {.req_type = 0};
 	struct hwrm_func_qstats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_QSTATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QSTATS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
@@ -2221,7 +2221,7 @@ int bnxt_hwrm_func_clr_stats(struct bnxt *bp, uint16_t fid)
 	struct hwrm_func_clr_stats_input req = {.req_type = 0};
 	struct hwrm_func_clr_stats_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_CLR_STATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CLR_STATS, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(fid);
 
@@ -2928,7 +2928,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	uint16_t flags;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3037,7 +3037,7 @@ static int bnxt_hwrm_pf_func_cfg(struct bnxt *bp, int tx_rings)
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.enables = rte_cpu_to_le_32(enables);
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3109,7 +3109,7 @@ static int reserve_resources_from_vf(struct bnxt *bp,
 	int rc;
 
 	/* Get the actual allocated values now */
-	HWRM_PREP(req, FUNC_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCAPS, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3147,7 +3147,7 @@ int bnxt_hwrm_func_qcfg_current_vf_vlan(struct bnxt *bp, int vf)
 	int rc;
 
 	/* Check for zero MAC address */
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -3165,7 +3165,7 @@ static int update_pf_resource_max(struct bnxt *bp)
 	int rc;
 
 	/* And copy the allocated numbers into the pf struct */
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -3268,7 +3268,7 @@ int bnxt_hwrm_allocate_vfs(struct bnxt *bp, int num_vfs)
 	for (i = 0; i < num_vfs; i++) {
 		add_random_mac_if_needed(bp, &req, i);
 
-		HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+		HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 		req.flags = rte_cpu_to_le_32(bp->pf.vf_info[i].func_cfg_flags);
 		req.fid = rte_cpu_to_le_16(bp->pf.vf_info[i].fid);
 		rc = bnxt_hwrm_send_message(bp,
@@ -3324,7 +3324,7 @@ int bnxt_hwrm_pf_evb_mode(struct bnxt *bp)
 	struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.enables = rte_cpu_to_le_32(HWRM_FUNC_CFG_INPUT_ENABLES_EVB_MODE);
@@ -3344,7 +3344,7 @@ int bnxt_hwrm_tunnel_dst_port_alloc(struct bnxt *bp, uint16_t port,
 	struct hwrm_tunnel_dst_port_alloc_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, TUNNEL_DST_PORT_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_TUNNEL_DST_PORT_ALLOC, BNXT_USE_CHIMP_MB);
 	req.tunnel_type = tunnel_type;
 	req.tunnel_dst_port_val = port;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3375,7 +3375,7 @@ int bnxt_hwrm_tunnel_dst_port_free(struct bnxt *bp, uint16_t port,
 	struct hwrm_tunnel_dst_port_free_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, TUNNEL_DST_PORT_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_TUNNEL_DST_PORT_FREE, BNXT_USE_CHIMP_MB);
 
 	req.tunnel_type = tunnel_type;
 	req.tunnel_dst_port_id = rte_cpu_to_be_16(port);
@@ -3394,7 +3394,7 @@ int bnxt_hwrm_func_cfg_vf_set_flags(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.flags = rte_cpu_to_le_32(flags);
@@ -3424,7 +3424,7 @@ int bnxt_hwrm_func_buf_rgtr(struct bnxt *bp)
 	struct hwrm_func_buf_rgtr_input req = {.req_type = 0 };
 	struct hwrm_func_buf_rgtr_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, FUNC_BUF_RGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BUF_RGTR, BNXT_USE_CHIMP_MB);
 
 	req.req_buf_num_pages = rte_cpu_to_le_16(1);
 	req.req_buf_page_size = rte_cpu_to_le_16(
@@ -3455,7 +3455,7 @@ int bnxt_hwrm_func_buf_unrgtr(struct bnxt *bp)
 	if (!(BNXT_PF(bp) && bp->pdev->max_vfs))
 		return 0;
 
-	HWRM_PREP(req, FUNC_BUF_UNRGTR, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BUF_UNRGTR, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3471,7 +3471,7 @@ int bnxt_hwrm_func_cfg_def_cp(struct bnxt *bp)
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(0xffff);
 	req.flags = rte_cpu_to_le_32(bp->pf.func_cfg_flags);
@@ -3493,7 +3493,7 @@ int bnxt_hwrm_vf_func_cfg_def_cp(struct bnxt *bp)
 	struct hwrm_func_vf_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
 
 	req.enables = rte_cpu_to_le_32(
 			HWRM_FUNC_VF_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
@@ -3515,7 +3515,7 @@ int bnxt_hwrm_set_default_vlan(struct bnxt *bp, int vf, uint8_t is_vf)
 	uint32_t func_cfg_flags;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	if (is_vf) {
 		dflt_vlan = bp->pf.vf_info[vf].dflt_vlan;
@@ -3547,7 +3547,7 @@ int bnxt_hwrm_func_bw_cfg(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.enables |= rte_cpu_to_le_32(enables);
@@ -3567,7 +3567,7 @@ int bnxt_hwrm_set_vf_vlan(struct bnxt *bp, int vf)
 	struct hwrm_func_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(bp->pf.vf_info[vf].func_cfg_flags);
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
@@ -3604,7 +3604,7 @@ int bnxt_hwrm_reject_fwd_resp(struct bnxt *bp, uint16_t target_id,
 	if (ec_size > sizeof(req.encap_request))
 		return -1;
 
-	HWRM_PREP(req, REJECT_FWD_RESP, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_REJECT_FWD_RESP, BNXT_USE_CHIMP_MB);
 
 	req.encap_resp_target_id = rte_cpu_to_le_16(target_id);
 	memcpy(req.encap_request, encaped, ec_size);
@@ -3624,7 +3624,7 @@ int bnxt_hwrm_func_qcfg_vf_default_mac(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 	int rc;
 
-	HWRM_PREP(req, FUNC_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3648,7 +3648,7 @@ int bnxt_hwrm_exec_fwd_resp(struct bnxt *bp, uint16_t target_id,
 	if (ec_size > sizeof(req.encap_request))
 		return -1;
 
-	HWRM_PREP(req, EXEC_FWD_RESP, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_EXEC_FWD_RESP, BNXT_USE_CHIMP_MB);
 
 	req.encap_resp_target_id = rte_cpu_to_le_16(target_id);
 	memcpy(req.encap_request, encaped, ec_size);
@@ -3668,7 +3668,7 @@ int bnxt_hwrm_ctx_qstats(struct bnxt *bp, uint32_t cid, int idx,
 	struct hwrm_stat_ctx_query_input req = {.req_type = 0};
 	struct hwrm_stat_ctx_query_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, STAT_CTX_QUERY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_STAT_CTX_QUERY, BNXT_USE_CHIMP_MB);
 
 	req.stat_ctx_id = rte_cpu_to_le_32(cid);
 
@@ -3706,7 +3706,7 @@ int bnxt_hwrm_port_qstats(struct bnxt *bp)
 	struct bnxt_pf_info *pf = &bp->pf;
 	int rc;
 
-	HWRM_PREP(req, PORT_QSTATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_QSTATS, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	req.tx_stat_host_addr = rte_cpu_to_le_64(bp->hw_tx_port_stats_map);
@@ -3731,7 +3731,7 @@ int bnxt_hwrm_port_clr_stats(struct bnxt *bp)
 	    BNXT_NPAR(bp) || BNXT_MH(bp) || BNXT_TOTAL_VFS(bp))
 		return 0;
 
-	HWRM_PREP(req, PORT_CLR_STATS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_CLR_STATS, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -3751,7 +3751,7 @@ int bnxt_hwrm_port_led_qcaps(struct bnxt *bp)
 	if (BNXT_VF(bp))
 		return 0;
 
-	HWRM_PREP(req, PORT_LED_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_LED_QCAPS, BNXT_USE_CHIMP_MB);
 	req.port_id = bp->pf.port_id;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3793,7 +3793,7 @@ int bnxt_hwrm_port_led_cfg(struct bnxt *bp, bool led_on)
 	if (!bp->num_leds || BNXT_VF(bp))
 		return -EOPNOTSUPP;
 
-	HWRM_PREP(req, PORT_LED_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_LED_CFG, BNXT_USE_CHIMP_MB);
 
 	if (led_on) {
 		led_state = HWRM_PORT_LED_CFG_INPUT_LED0_STATE_BLINKALT;
@@ -3826,7 +3826,7 @@ int bnxt_hwrm_nvm_get_dir_info(struct bnxt *bp, uint32_t *entries,
 	struct hwrm_nvm_get_dir_info_input req = {0};
 	struct hwrm_nvm_get_dir_info_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, NVM_GET_DIR_INFO, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_GET_DIR_INFO, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3869,7 +3869,7 @@ int bnxt_get_nvram_directory(struct bnxt *bp, uint32_t len, uint8_t *data)
 			"unable to map response address to physical memory\n");
 		return -ENOMEM;
 	}
-	HWRM_PREP(req, NVM_GET_DIR_ENTRIES, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_GET_DIR_ENTRIES, BNXT_USE_CHIMP_MB);
 	req.host_dest_addr = rte_cpu_to_le_64(dma_handle);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -3903,7 +3903,7 @@ int bnxt_hwrm_get_nvram_item(struct bnxt *bp, uint32_t index,
 			"unable to map response address to physical memory\n");
 		return -ENOMEM;
 	}
-	HWRM_PREP(req, NVM_READ, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_READ, BNXT_USE_CHIMP_MB);
 	req.host_dest_addr = rte_cpu_to_le_64(dma_handle);
 	req.dir_idx = rte_cpu_to_le_16(index);
 	req.offset = rte_cpu_to_le_32(offset);
@@ -3925,7 +3925,7 @@ int bnxt_hwrm_erase_nvram_directory(struct bnxt *bp, uint8_t index)
 	struct hwrm_nvm_erase_dir_entry_input req = {0};
 	struct hwrm_nvm_erase_dir_entry_output *resp = bp->hwrm_cmd_resp_addr;
 
-	HWRM_PREP(req, NVM_ERASE_DIR_ENTRY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_ERASE_DIR_ENTRY, BNXT_USE_CHIMP_MB);
 	req.dir_idx = rte_cpu_to_le_16(index);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -3958,7 +3958,7 @@ int bnxt_hwrm_flash_nvram(struct bnxt *bp, uint16_t dir_type,
 	}
 	memcpy(buf, data, data_len);
 
-	HWRM_PREP(req, NVM_WRITE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_NVM_WRITE, BNXT_USE_CHIMP_MB);
 
 	req.dir_type = rte_cpu_to_le_16(dir_type);
 	req.dir_ordinal = rte_cpu_to_le_16(dir_ordinal);
@@ -4009,7 +4009,7 @@ static int bnxt_hwrm_func_vf_vnic_query(struct bnxt *bp, uint16_t vf,
 	int rc;
 
 	/* First query all VNIC ids */
-	HWRM_PREP(req, FUNC_VF_VNIC_IDS_QUERY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_VNIC_IDS_QUERY, BNXT_USE_CHIMP_MB);
 
 	req.vf_id = rte_cpu_to_le_16(bp->pf.first_vf_id + vf);
 	req.max_vnic_id_cnt = rte_cpu_to_le_32(bp->pf.total_vnics);
@@ -4091,7 +4091,7 @@ int bnxt_hwrm_func_cfg_vf_set_vlan_anti_spoof(struct bnxt *bp, uint16_t vf,
 	struct hwrm_func_cfg_input req = {0};
 	int rc;
 
-	HWRM_PREP(req, FUNC_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_CFG, BNXT_USE_CHIMP_MB);
 
 	req.fid = rte_cpu_to_le_16(bp->pf.vf_info[vf].fid);
 	req.enables |= rte_cpu_to_le_32(
@@ -4166,7 +4166,7 @@ int bnxt_hwrm_set_em_filter(struct bnxt *bp,
 	if (filter->fw_em_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_em_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_EM_FLOW_ALLOC, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_CFA_EM_FLOW_ALLOC, BNXT_USE_KONG(bp));
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -4238,7 +4238,7 @@ int bnxt_hwrm_clear_em_filter(struct bnxt *bp, struct bnxt_filter_info *filter)
 	if (filter->fw_em_filter_id == UINT64_MAX)
 		return 0;
 
-	HWRM_PREP(req, CFA_EM_FLOW_FREE, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_CFA_EM_FLOW_FREE, BNXT_USE_KONG(bp));
 
 	req.em_filter_id = rte_cpu_to_le_64(filter->fw_em_filter_id);
 
@@ -4266,7 +4266,7 @@ int bnxt_hwrm_set_ntuple_filter(struct bnxt *bp,
 	if (filter->fw_ntuple_filter_id != UINT64_MAX)
 		bnxt_hwrm_clear_ntuple_filter(bp, filter);
 
-	HWRM_PREP(req, CFA_NTUPLE_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_NTUPLE_FILTER_ALLOC, BNXT_USE_CHIMP_MB);
 
 	req.flags = rte_cpu_to_le_32(filter->flags);
 
@@ -4346,7 +4346,7 @@ int bnxt_hwrm_clear_ntuple_filter(struct bnxt *bp,
 	if (filter->fw_ntuple_filter_id == UINT64_MAX)
 		return 0;
 
-	HWRM_PREP(req, CFA_NTUPLE_FILTER_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_NTUPLE_FILTER_FREE, BNXT_USE_CHIMP_MB);
 
 	req.ntuple_filter_id = rte_cpu_to_le_64(filter->fw_ntuple_filter_id);
 
@@ -4377,7 +4377,7 @@ bnxt_vnic_rss_configure_thor(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 		struct bnxt_rx_ring_info *rxr;
 		struct bnxt_cp_ring_info *cpr;
 
-		HWRM_PREP(req, VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
+		HWRM_PREP(&req, HWRM_VNIC_RSS_CFG, BNXT_USE_CHIMP_MB);
 
 		req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 		req.hash_type = rte_cpu_to_le_32(vnic->hash_type);
@@ -4509,7 +4509,7 @@ static int bnxt_hwrm_set_coal_params_thor(struct bnxt *bp,
 	uint16_t flags;
 	int rc;
 
-	HWRM_PREP(req, RING_AGGINT_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_RING_AGGINT_QCAPS, BNXT_USE_CHIMP_MB);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
 
@@ -4546,7 +4546,9 @@ int bnxt_hwrm_set_ring_coal(struct bnxt *bp,
 		return 0;
 	}
 
-	HWRM_PREP(req, RING_CMPL_RING_CFG_AGGINT_PARAMS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req,
+		  HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS,
+		  BNXT_USE_CHIMP_MB);
 	req.ring_id = rte_cpu_to_le_16(ring_id);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -4571,7 +4573,7 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 	    bp->ctx)
 		return 0;
 
-	HWRM_PREP(req, FUNC_BACKING_STORE_QCAPS, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS, BNXT_USE_CHIMP_MB);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT_SILENT();
 
@@ -4650,7 +4652,7 @@ int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, uint32_t enables)
 	if (!ctx)
 		return 0;
 
-	HWRM_PREP(req, FUNC_BACKING_STORE_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_CFG, BNXT_USE_CHIMP_MB);
 	req.enables = rte_cpu_to_le_32(enables);
 
 	if (enables & HWRM_FUNC_BACKING_STORE_CFG_INPUT_ENABLES_QP) {
@@ -4743,7 +4745,7 @@ int bnxt_hwrm_ext_port_qstats(struct bnxt *bp)
 	      bp->flags & BNXT_FLAG_EXT_TX_PORT_STATS))
 		return 0;
 
-	HWRM_PREP(req, PORT_QSTATS_EXT, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_QSTATS_EXT, BNXT_USE_CHIMP_MB);
 
 	req.port_id = rte_cpu_to_le_16(pf->port_id);
 	if (bp->flags & BNXT_FLAG_EXT_TX_PORT_STATS) {
@@ -4784,7 +4786,7 @@ bnxt_hwrm_tunnel_redirect(struct bnxt *bp, uint8_t type)
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_TUNNEL_TYPE_ALLOC, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_TUNNEL_TYPE_ALLOC, BNXT_USE_CHIMP_MB);
 	req.tunnel_type = type;
 	req.dest_fid = bp->fw_fid;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -4803,7 +4805,7 @@ bnxt_hwrm_tunnel_redirect_free(struct bnxt *bp, uint8_t type)
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_TUNNEL_TYPE_FREE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_TUNNEL_TYPE_FREE, BNXT_USE_CHIMP_MB);
 	req.tunnel_type = type;
 	req.dest_fid = bp->fw_fid;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -4821,7 +4823,7 @@ int bnxt_hwrm_tunnel_redirect_query(struct bnxt *bp, uint32_t *type)
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_QUERY_TUNNEL_TYPE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_QUERY_TUNNEL_TYPE, BNXT_USE_CHIMP_MB);
 	req.src_fid = bp->fw_fid;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT();
@@ -4842,7 +4844,7 @@ int bnxt_hwrm_tunnel_redirect_info(struct bnxt *bp, uint8_t tun_type,
 		bp->hwrm_cmd_resp_addr;
 	int rc = 0;
 
-	HWRM_PREP(req, CFA_REDIRECT_TUNNEL_TYPE_INFO, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_CFA_REDIRECT_TUNNEL_TYPE_INFO, BNXT_USE_CHIMP_MB);
 	req.src_fid = bp->fw_fid;
 	req.tunnel_type = tun_type;
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
@@ -4867,7 +4869,7 @@ int bnxt_hwrm_set_mac(struct bnxt *bp)
 	if (!BNXT_VF(bp))
 		return 0;
 
-	HWRM_PREP(req, FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_VF_CFG, BNXT_USE_CHIMP_MB);
 
 	req.enables =
 		rte_cpu_to_le_32(HWRM_FUNC_VF_CFG_INPUT_ENABLES_DFLT_MAC_ADDR);
@@ -4900,7 +4902,7 @@ int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
 	if (!up && (bp->flags & BNXT_FLAG_FW_RESET))
 		return 0;
 
-	HWRM_PREP(req, FUNC_DRV_IF_CHANGE, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_FUNC_DRV_IF_CHANGE, BNXT_USE_CHIMP_MB);
 
 	if (up)
 		req.flags =
@@ -4946,7 +4948,7 @@ int bnxt_hwrm_error_recovery_qcfg(struct bnxt *bp)
 		memset(info, 0, sizeof(*info));
 	}
 
-	HWRM_PREP(req, ERROR_RECOVERY_QCFG, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_ERROR_RECOVERY_QCFG, BNXT_USE_CHIMP_MB);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
@@ -5022,7 +5024,7 @@ int bnxt_hwrm_fw_reset(struct bnxt *bp)
 	if (!BNXT_PF(bp))
 		return -EOPNOTSUPP;
 
-	HWRM_PREP(req, FW_RESET, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_FW_RESET, BNXT_USE_KONG(bp));
 
 	req.embedded_proc_type =
 		HWRM_FW_RESET_INPUT_EMBEDDED_PROC_TYPE_CHIP;
@@ -5050,7 +5052,7 @@ int bnxt_hwrm_port_ts_query(struct bnxt *bp, uint8_t path, uint64_t *timestamp)
 	if (!ptp)
 		return 0;
 
-	HWRM_PREP(req, PORT_TS_QUERY, BNXT_USE_CHIMP_MB);
+	HWRM_PREP(&req, HWRM_PORT_TS_QUERY, BNXT_USE_CHIMP_MB);
 
 	switch (path) {
 	case BNXT_PTP_FLAGS_PATH_TX:
@@ -5098,7 +5100,7 @@ int bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(struct bnxt *bp)
 		return 0;
 	}
 
-	HWRM_PREP(req, CFA_ADV_FLOW_MGNT_QCAPS, BNXT_USE_KONG(bp));
+	HWRM_PREP(&req, HWRM_CFA_ADV_FLOW_MGNT_QCAPS, BNXT_USE_KONG(bp));
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_KONG(bp));
 
 	HWRM_CHECK_RESULT();
-- 
2.7.4
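
The call-site conversion is mechanical throughout the driver. A minimal
before/after pair, taken from the VER_GET site in this patch purely for
illustration:

	/* Before: request passed by name; the macro pasted on the HWRM_ prefix */
	HWRM_PREP(req, VER_GET, BNXT_USE_CHIMP_MB);

	/* After: request passed by pointer; the request type is spelled out */
	HWRM_PREP(&req, HWRM_VER_GET, BNXT_USE_CHIMP_MB);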


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 03/34] net/bnxt: add truflow message handlers
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 01/34] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 02/34] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 04/34] net/bnxt: add initial tf core session open Venkat Duvvuru
                         ` (33 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Pete Spreadborough, Randy Schacher

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- Add bnxt message functions for truflow APIs
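
A minimal usage sketch of the direct handler (illustrative only; the
request/response payload structs and the HWRM_TF_SESSION_OPEN request
type are placeholders standing in for definitions outside this patch):

	/* Build a TruFlow request, send it on the selected mailbox, and
	 * copy the firmware reply back into the caller's buffer.
	 */
	struct tf_session_open_req req = { 0 };	/* hypothetical payload */
	struct tf_session_open_resp resp = { 0 };	/* hypothetical payload */
	int rc;

	rc = bnxt_hwrm_tf_message_direct(bp, true /* use_kong_mb */,
					 HWRM_TF_SESSION_OPEN,
					 &req, sizeof(req),
					 &resp, sizeof(resp));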

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c | 83 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h | 18 ++++++++++
 2 files changed, 101 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 93b2ea7..443553b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -257,6 +257,89 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
 
 #define HWRM_UNLOCK()		rte_spinlock_unlock(&bp->hwrm_lock)
 
+int bnxt_hwrm_tf_message_direct(struct bnxt *bp,
+				bool use_kong_mb,
+				uint16_t msg_type,
+				void *msg,
+				uint32_t msg_len,
+				void *resp_msg,
+				uint32_t resp_len)
+{
+	int rc = 0;
+	bool mailbox = BNXT_USE_CHIMP_MB;
+	struct input *req = msg;
+	struct output *resp = bp->hwrm_cmd_resp_addr;
+
+	if (use_kong_mb)
+		mailbox = BNXT_USE_KONG(bp);
+
+	HWRM_PREP(req, msg_type, mailbox);
+
+	rc = bnxt_hwrm_send_message(bp, req, msg_len, mailbox);
+
+	HWRM_CHECK_RESULT();
+
+	if (resp_msg)
+		memcpy(resp_msg, resp, resp_len);
+
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
+int bnxt_hwrm_tf_message_tunneled(struct bnxt *bp,
+				  bool use_kong_mb,
+				  uint16_t tf_type,
+				  uint16_t tf_subtype,
+				  uint32_t *tf_response_code,
+				  void *msg,
+				  uint32_t msg_len,
+				  void *response,
+				  uint32_t response_len)
+{
+	int rc = 0;
+	struct hwrm_cfa_tflib_input req = { .req_type = 0 };
+	struct hwrm_cfa_tflib_output *resp = bp->hwrm_cmd_resp_addr;
+	bool mailbox = BNXT_USE_CHIMP_MB;
+
+	if (msg_len > sizeof(req.tf_req))
+		return -ENOMEM;
+
+	if (use_kong_mb)
+		mailbox = BNXT_USE_KONG(bp);
+
+	HWRM_PREP(&req, HWRM_TF, mailbox);
+	/* Build request using the user supplied request payload.
+	 * TLV request size is checked at build time against HWRM
+	 * request max size, thus no checking required.
+	 */
+	req.tf_type = tf_type;
+	req.tf_subtype = tf_subtype;
+	memcpy(req.tf_req, msg, msg_len);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), mailbox);
+	HWRM_CHECK_RESULT();
+
+	/* Copy the resp to user provided response buffer */
+	if (response != NULL)
+		/* Post process response data. We need to copy only
+		 * the 'payload' as the HWRM data structure really is
+		 * HWRM header + msg header + payload and the TFLIB
+		 * only provided a payload placeholder.
+		 */
+		if (response_len != 0) {
+			memcpy(response,
+			       resp->tf_resp,
+			       response_len);
+		}
+
+	/* Extract the internal tflib response code */
+	*tf_response_code = resp->tf_resp_code;
+	HWRM_UNLOCK();
+
+	return rc;
+}
+
 int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 {
 	int rc = 0;
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 5eb2ee8..df7aa74 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -69,6 +69,24 @@ HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_L2_HEADER_SOURCE_FIELDS_SUPPORTED
 	bp->rx_cos_queue[x].profile =	\
 		resp->queue_id##x##_service_profile
 
+int bnxt_hwrm_tf_message_tunneled(struct bnxt *bp,
+				  bool use_kong_mb,
+				  uint16_t tf_type,
+				  uint16_t tf_subtype,
+				  uint32_t *tf_response_code,
+				  void *msg,
+				  uint32_t msg_len,
+				  void *response,
+				  uint32_t response_len);
+
+int bnxt_hwrm_tf_message_direct(struct bnxt *bp,
+				bool use_kong_mb,
+				uint16_t msg_type,
+				void *msg,
+				uint32_t msg_len,
+				void *resp_msg,
+				uint32_t resp_len);
+
 int bnxt_hwrm_cfa_l2_clear_rx_mask(struct bnxt *bp,
 				   struct bnxt_vnic_info *vnic);
 int bnxt_hwrm_cfa_l2_set_rx_mask(struct bnxt *bp, struct bnxt_vnic_info *vnic,
-- 
2.7.4
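
For the tunneled variant, the caller hands over only the TLV payload;
the handler wraps it in an HWRM_TF request and, on return, extracts the
internal tflib response code. A rough sketch (TF_TYPE_TRUFLOW and the
HWRM_TFT_* subtypes are introduced by a later patch in this series, and
msg/msg_len stand for a caller-built payload):

	uint32_t tf_resp_code;
	int rc;

	/* Passing a NULL response buffer with a zero length skips the
	 * reply copy; tf_resp_code still reports the tflib status.
	 */
	rc = bnxt_hwrm_tf_message_tunneled(bp, false, TF_TYPE_TRUFLOW,
					   HWRM_TFT_SESSION_ATTACH,
					   &tf_resp_code,
					   msg, msg_len, NULL, 0);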


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 04/34] net/bnxt: add initial tf core session open
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (2 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 03/34] net/bnxt: add truflow message handlers Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-16 17:39         ` Ferruh Yigit
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 05/34] net/bnxt: add initial tf core session close support Venkat Duvvuru
                         ` (32 subsequent siblings)
  36 siblings, 1 reply; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt, Venkat Duvvuru

From: Michael Wildt <michael.wildt@broadcom.com>

- Add infrastructure support
- Add tf_core open session support
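
A rough sketch of how these definitions combine with the message
handlers from the previous patch (illustrative only; fw_session_id
would come from an earlier HWRM_TF_SESSION_OPEN exchange, and error
handling is omitted):

	struct tf_session_attach_input req = { 0 };
	uint32_t tf_resp_code;
	int rc;

	req.fw_session_id = fw_session_id;
	strncpy(req.session_name, "tf-session", TF_SESSION_NAME_MAX - 1);

	rc = bnxt_hwrm_tf_message_tunneled(bp, false, TF_TYPE_TRUFLOW,
					   HWRM_TFT_SESSION_ATTACH,
					   &tf_resp_code,
					   &req, sizeof(req), NULL, 0);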

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/Makefile                |   8 +
 drivers/net/bnxt/bnxt.h                  |   4 +
 drivers/net/bnxt/tf_core/hwrm_tf.h       | 971 +++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.c       | 145 +++++
 drivers/net/bnxt/tf_core/tf_core.h       | 347 +++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c        |  79 +++
 drivers/net/bnxt/tf_core/tf_msg.h        |  44 ++
 drivers/net/bnxt/tf_core/tf_msg_common.h |  47 ++
 drivers/net/bnxt/tf_core/tf_project.h    |  24 +
 drivers/net/bnxt/tf_core/tf_resources.h  |  46 ++
 drivers/net/bnxt/tf_core/tf_rm.h         |  33 ++
 drivers/net/bnxt/tf_core/tf_session.h    |  85 +++
 drivers/net/bnxt/tf_core/tfp.c           | 163 ++++++
 drivers/net/bnxt/tf_core/tfp.h           | 188 ++++++
 14 files changed, 2184 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_core/hwrm_tf.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_core.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_msg_common.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_project.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_resources.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_session.h
 create mode 100644 drivers/net/bnxt/tf_core/tfp.c
 create mode 100644 drivers/net/bnxt/tf_core/tfp.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index b77532b..8a68059 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -43,6 +43,14 @@ ifeq ($(CONFIG_RTE_ARCH_X86), y)
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
+ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD), y)
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core
+endif
+
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
+
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index b795ed6..a8e57ca 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -21,6 +21,8 @@
 #include "bnxt_cpr.h"
 #include "bnxt_util.h"
 
+#include "tf_core.h"
+
 /* Vendor ID */
 #define PCI_VENDOR_ID_BROADCOM		0x14E4
 
@@ -679,6 +681,8 @@ struct bnxt {
 /* TCAM and EM should be 16-bit only. Other modes not supported. */
 #define BNXT_FLOW_ID_MASK	0x0000ffff
 	struct bnxt_mark_info	*mark_table;
+
+	struct tf               tfp;
 };
 
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
new file mode 100644
index 0000000..a8a5547
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -0,0 +1,971 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+#ifndef _HWRM_TF_H_
+#define _HWRM_TF_H_
+
+#include "tf_core.h"
+
+typedef enum tf_type {
+	TF_TYPE_TRUFLOW,
+	TF_TYPE_LAST = TF_TYPE_TRUFLOW,
+} tf_type_t;
+
+typedef enum tf_subtype {
+	HWRM_TFT_SESSION_ATTACH = 712,
+	HWRM_TFT_SESSION_HW_RESC_QCAPS = 721,
+	HWRM_TFT_SESSION_HW_RESC_ALLOC = 722,
+	HWRM_TFT_SESSION_HW_RESC_FREE = 723,
+	HWRM_TFT_SESSION_HW_RESC_FLUSH = 724,
+	HWRM_TFT_SESSION_SRAM_RESC_QCAPS = 725,
+	HWRM_TFT_SESSION_SRAM_RESC_ALLOC = 726,
+	HWRM_TFT_SESSION_SRAM_RESC_FREE = 727,
+	HWRM_TFT_SESSION_SRAM_RESC_FLUSH = 728,
+	HWRM_TFT_TBL_SCOPE_CFG = 731,
+	HWRM_TFT_EM_RULE_INSERT = 739,
+	HWRM_TFT_EM_RULE_DELETE = 740,
+	HWRM_TFT_REG_GET = 821,
+	HWRM_TFT_REG_SET = 822,
+	HWRM_TFT_TBL_TYPE_SET = 823,
+	HWRM_TFT_TBL_TYPE_GET = 824,
+	TF_SUBTYPE_LAST = HWRM_TFT_TBL_TYPE_GET,
+} tf_subtype_t;
+
+/* Request and Response compile time checking */
+/* u32_t	tlv_req_value[26]; */
+#define TF_MAX_REQ_SIZE 104
+/* u32_t	tlv_resp_value[170]; */
+#define TF_MAX_RESP_SIZE 680
+#define BUILD_BUG_ON(condition) typedef char p__LINE__[(condition) ? 1 : -1]
+
+/* Use this to allocate/free any kind of
+ * indexes over HWRM and fill the parms pointer
+ */
+#define TF_BULK_RECV	 128
+#define TF_BULK_SEND	  16
+
+/* EM Key value */
+#define TF_DEV_DATA_TYPE_TF_EM_RULE_INSERT_KEY_DATA 0x2e30UL
+/* EM Key value */
+#define TF_DEV_DATA_TYPE_TF_EM_RULE_DELETE_KEY_DATA 0x2e40UL
+/* L2 Context DMA Address Type */
+#define TF_DEV_DATA_TYPE_TF_L2_CTX_DMA_ADDR		0x2fe0UL
+/* L2 Context Entry */
+#define TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY		0x2fe1UL
+/* Prof tcam DMA Address Type */
+#define TF_DEV_DATA_TYPE_TF_PROF_TCAM_DMA_ADDR		0x3030UL
+/* Prof tcam Entry */
+#define TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY		0x3031UL
+/* WC DMA Address Type */
+#define TF_DEV_DATA_TYPE_TF_WC_DMA_ADDR			0x30d0UL
+/* WC Entry */
+#define TF_DEV_DATA_TYPE_TF_WC_ENTRY			0x30d1UL
+/* Action Data */
+#define TF_DEV_DATA_TYPE_TF_ACTION_DATA			0x3170UL
+#define TF_DEV_DATA_TYPE_LAST   TF_DEV_DATA_TYPE_TF_ACTION_DATA
+
+#define TF_BITS2BYTES(x) (((x) + 7) >> 3)
+#define TF_BITS2BYTES_WORD_ALIGN(x) ((((x) + 31) >> 5) * 4)
+
+struct tf_session_attach_input;
+struct tf_session_hw_resc_qcaps_input;
+struct tf_session_hw_resc_qcaps_output;
+struct tf_session_hw_resc_alloc_input;
+struct tf_session_hw_resc_alloc_output;
+struct tf_session_hw_resc_free_input;
+struct tf_session_hw_resc_flush_input;
+struct tf_session_sram_resc_qcaps_input;
+struct tf_session_sram_resc_qcaps_output;
+struct tf_session_sram_resc_alloc_input;
+struct tf_session_sram_resc_alloc_output;
+struct tf_session_sram_resc_free_input;
+struct tf_session_sram_resc_flush_input;
+struct tf_tbl_type_set_input;
+struct tf_tbl_type_get_input;
+struct tf_tbl_type_get_output;
+struct tf_em_internal_insert_input;
+struct tf_em_internal_insert_output;
+struct tf_em_internal_delete_input;
+/* Input params for session attach */
+typedef struct tf_session_attach_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* Session Name */
+	char				 session_name[TF_SESSION_NAME_MAX];
+} tf_session_attach_input_t, *ptf_session_attach_input_t;
+BUILD_BUG_ON(sizeof(tf_session_attach_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource HW qcaps */
+typedef struct tf_session_hw_resc_qcaps_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_SESSION_HW_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
+} tf_session_hw_resc_qcaps_input_t, *ptf_session_hw_resc_qcaps_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_qcaps_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource HW qcaps */
+typedef struct tf_session_hw_resc_qcaps_output {
+	/* Control Flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates Static partitioning */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
+	/* When set to 1, indicates Strategy 1 */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
+	/* When set to 2, indicates Strategy 2 */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
+	/* When set to 3, indicates Strategy 3 */
+#define TF_SESSION_HW_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
+	/* Unused */
+	uint8_t			  unused[4];
+	/* Minimum guaranteed number of L2 Ctx */
+	uint16_t			 l2_ctx_tcam_entries_min;
+	/* Maximum non-guaranteed number of L2 Ctx */
+	uint16_t			 l2_ctx_tcam_entries_max;
+	/* Minimum guaranteed number of profile functions */
+	uint16_t			 prof_func_min;
+	/* Maximum non-guaranteed number of profile functions */
+	uint16_t			 prof_func_max;
+	/* Minimum guaranteed number of profile TCAM entries */
+	uint16_t			 prof_tcam_entries_min;
+	/* Maximum non-guaranteed number of profile TCAM entries */
+	uint16_t			 prof_tcam_entries_max;
+	/* Minimum guaranteed number of EM profile ID */
+	uint16_t			 em_prof_id_min;
+	/* Maximum non-guaranteed number of EM profile ID */
+	uint16_t			 em_prof_id_max;
+	/* Minimum guaranteed number of EM records entries */
+	uint16_t			 em_record_entries_min;
+	/* Maximum non-guaranteed number of EM record entries */
+	uint16_t			 em_record_entries_max;
+	/* Minimum guaranteed number of WC TCAM profile ID */
+	uint16_t			 wc_tcam_prof_id_min;
+	/* Maximum non-guaranteed number of WC TCAM profile ID */
+	uint16_t			 wc_tcam_prof_id_max;
+	/* Minimum guaranteed number of WC TCAM entries */
+	uint16_t			 wc_tcam_entries_min;
+	/* Maximum non-guaranteed number of WC TCAM entries */
+	uint16_t			 wc_tcam_entries_max;
+	/* Minimum guaranteed number of meter profiles */
+	uint16_t			 meter_profiles_min;
+	/* Maximum non-guaranteed number of meter profiles */
+	uint16_t			 meter_profiles_max;
+	/* Minimum guaranteed number of meter instances */
+	uint16_t			 meter_inst_min;
+	/* Maximum non-guaranteed number of meter instances */
+	uint16_t			 meter_inst_max;
+	/* Minimum guaranteed number of mirrors */
+	uint16_t			 mirrors_min;
+	/* Maximum non-guaranteed number of mirrors */
+	uint16_t			 mirrors_max;
+	/* Minimum guaranteed number of UPAR */
+	uint16_t			 upar_min;
+	/* Maximum non-guaranteed number of UPAR */
+	uint16_t			 upar_max;
+	/* Minimum guaranteed number of SP TCAM entries */
+	uint16_t			 sp_tcam_entries_min;
+	/* Maximum non-guaranteed number of SP TCAM entries */
+	uint16_t			 sp_tcam_entries_max;
+	/* Minimum guaranteed number of L2 Functions */
+	uint16_t			 l2_func_min;
+	/* Maximum non-guaranteed number of L2 Functions */
+	uint16_t			 l2_func_max;
+	/* Minimum guaranteed number of flexible key templates */
+	uint16_t			 flex_key_templ_min;
+	/* Maximum non-guaranteed number of flexible key templates */
+	uint16_t			 flex_key_templ_max;
+	/* Minimum guaranteed number of table Scopes */
+	uint16_t			 tbl_scope_min;
+	/* Maximum non-guaranteed number of table Scopes */
+	uint16_t			 tbl_scope_max;
+	/* Minimum guaranteed number of epoch0 entries */
+	uint16_t			 epoch0_entries_min;
+	/* Maximum non-guaranteed number of epoch0 entries */
+	uint16_t			 epoch0_entries_max;
+	/* Minimum guaranteed number of epoch1 entries */
+	uint16_t			 epoch1_entries_min;
+	/* Maximum non-guaranteed number of epoch1 entries */
+	uint16_t			 epoch1_entries_max;
+	/* Minimum guaranteed number of metadata */
+	uint16_t			 metadata_min;
+	/* Maximum non-guaranteed number of metadata */
+	uint16_t			 metadata_max;
+	/* Minimum guaranteed number of CT states */
+	uint16_t			 ct_state_min;
+	/* Maximum non-guaranteed number of CT states */
+	uint16_t			 ct_state_max;
+	/* Minimum guaranteed number of range profiles */
+	uint16_t			 range_prof_min;
+	/* Maximum non-guaranteed number of range profiles */
+	uint16_t			 range_prof_max;
+	/* Minimum guaranteed number of range entries */
+	uint16_t			 range_entries_min;
+	/* Maximum non-guaranteed number of range entries */
+	uint16_t			 range_entries_max;
+	/* Minimum guaranteed number of LAG table entries */
+	uint16_t			 lag_tbl_entries_min;
+	/* Maximum non-guaranteed number of LAG table entries */
+	uint16_t			 lag_tbl_entries_max;
+} tf_session_hw_resc_qcaps_output_t, *ptf_session_hw_resc_qcaps_output_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_qcaps_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource HW alloc */
+typedef struct tf_session_hw_resc_alloc_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_SESSION_HW_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Number of L2 CTX TCAM entries to be allocated */
+	uint16_t			 num_l2_ctx_tcam_entries;
+	/* Number of profile functions to be allocated */
+	uint16_t			 num_prof_func_entries;
+	/* Number of profile TCAM entries to be allocated */
+	uint16_t			 num_prof_tcam_entries;
+	/* Number of EM profile ids to be allocated */
+	uint16_t			 num_em_prof_id;
+	/* Number of EM records entries to be allocated */
+	uint16_t			 num_em_record_entries;
+	/* Number of WC profiles ids to be allocated */
+	uint16_t			 num_wc_tcam_prof_id;
+	/* Number of WC TCAM entries to be allocated */
+	uint16_t			 num_wc_tcam_entries;
+	/* Number of meter profiles to be allocated */
+	uint16_t			 num_meter_profiles;
+	/* Number of meter instances to be allocated */
+	uint16_t			 num_meter_inst;
+	/* Number of mirrors to be allocated */
+	uint16_t			 num_mirrors;
+	/* Number of UPAR to be allocated */
+	uint16_t			 num_upar;
+	/* Number of SP TCAM entries to be allocated */
+	uint16_t			 num_sp_tcam_entries;
+	/* Number of L2 functions to be allocated */
+	uint16_t			 num_l2_func;
+	/* Number of flexible key templates to be allocated */
+	uint16_t			 num_flex_key_templ;
+	/* Number of table scopes to be allocated */
+	uint16_t			 num_tbl_scope;
+	/* Number of epoch0 entries to be allocated */
+	uint16_t			 num_epoch0_entries;
+	/* Number of epoch1 entries to be allocated */
+	uint16_t			 num_epoch1_entries;
+	/* Number of metadata to be allocated */
+	uint16_t			 num_metadata;
+	/* Number of CT states to be allocated */
+	uint16_t			 num_ct_state;
+	/* Number of range profiles to be allocated */
+	uint16_t			 num_range_prof;
+	/* Number of range Entries to be allocated */
+	uint16_t			 num_range_entries;
+	/* Number of LAG table entries to be allocated */
+	uint16_t			 num_lag_tbl_entries;
+} tf_session_hw_resc_alloc_input_t, *ptf_session_hw_resc_alloc_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_alloc_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource HW alloc */
+typedef struct tf_session_hw_resc_alloc_output {
+	/* Starting index of L2 CTX TCAM entries allocated to the session */
+	uint16_t			 l2_ctx_tcam_entries_start;
+	/* Number of L2 CTX TCAM entries allocated */
+	uint16_t			 l2_ctx_tcam_entries_stride;
+	/* Starting index of profile functions allocated to the session */
+	uint16_t			 prof_func_start;
+	/* Number of profile functions allocated */
+	uint16_t			 prof_func_stride;
+	/* Starting index of profile TCAM entries allocated to the session */
+	uint16_t			 prof_tcam_entries_start;
+	/* Number of profile TCAM entries allocated */
+	uint16_t			 prof_tcam_entries_stride;
+	/* Starting index of EM profile ids allocated to the session */
+	uint16_t			 em_prof_id_start;
+	/* Number of EM profile ids allocated */
+	uint16_t			 em_prof_id_stride;
+	/* Starting index of EM record entries allocated to the session */
+	uint16_t			 em_record_entries_start;
+	/* Number of EM record entries allocated */
+	uint16_t			 em_record_entries_stride;
+	/* Starting index of WC TCAM profiles ids allocated to the session */
+	uint16_t			 wc_tcam_prof_id_start;
+	/* Number of WC TCAM profile ids allocated */
+	uint16_t			 wc_tcam_prof_id_stride;
+	/* Starting index of WC TCAM entries allocated to the session */
+	uint16_t			 wc_tcam_entries_start;
+	/* Number of WC TCAM allocated */
+	uint16_t			 wc_tcam_entries_stride;
+	/* Starting index of meter profiles allocated to the session */
+	uint16_t			 meter_profiles_start;
+	/* Number of meter profiles allocated */
+	uint16_t			 meter_profiles_stride;
+	/* Starting index of meter instance allocated to the session */
+	uint16_t			 meter_inst_start;
+	/* Number of meter instance allocated */
+	uint16_t			 meter_inst_stride;
+	/* Starting index of mirrors allocated to the session */
+	uint16_t			 mirrors_start;
+	/* Number of mirrors allocated */
+	uint16_t			 mirrors_stride;
+	/* Starting index of UPAR allocated to the session */
+	uint16_t			 upar_start;
+	/* Number of UPAR allocated */
+	uint16_t			 upar_stride;
+	/* Starting index of SP TCAM entries allocated to the session */
+	uint16_t			 sp_tcam_entries_start;
+	/* Number of SP TCAM entries allocated */
+	uint16_t			 sp_tcam_entries_stride;
+	/* Starting index of L2 functions allocated to the session */
+	uint16_t			 l2_func_start;
+	/* Number of L2 functions allocated */
+	uint16_t			 l2_func_stride;
+	/* Starting index of flexible key templates allocated to the session */
+	uint16_t			 flex_key_templ_start;
+	/* Number of flexible key templates allocated */
+	uint16_t			 flex_key_templ_stride;
+	/* Starting index of table scopes allocated to the session */
+	uint16_t			 tbl_scope_start;
+	/* Number of table scopes allocated */
+	uint16_t			 tbl_scope_stride;
+	/* Starting index of epoch0 entries allocated to the session */
+	uint16_t			 epoch0_entries_start;
+	/* Number of epoch0 entries allocated */
+	uint16_t			 epoch0_entries_stride;
+	/* Starting index of epoch1 entries allocated to the session */
+	uint16_t			 epoch1_entries_start;
+	/* Number of epoch1 entries allocated */
+	uint16_t			 epoch1_entries_stride;
+	/* Starting index of metadata allocated to the session */
+	uint16_t			 metadata_start;
+	/* Number of metadata allocated */
+	uint16_t			 metadata_stride;
+	/* Starting index of CT states allocated to the session */
+	uint16_t			 ct_state_start;
+	/* Number of CT states allocated */
+	uint16_t			 ct_state_stride;
+	/* Starting index of range profiles allocated to the session */
+	uint16_t			 range_prof_start;
+	/* Number of range profiles allocated */
+	uint16_t			 range_prof_stride;
+	/* Starting index of range entries allocated to the session */
+	uint16_t			 range_entries_start;
+	/* Number of range entries allocated */
+	uint16_t			 range_entries_stride;
+	/* Starting index of LAG table entries allocated to the session */
+	uint16_t			 lag_tbl_entries_start;
+	/* Number of LAG table entries allocated */
+	uint16_t			 lag_tbl_entries_stride;
+} tf_session_hw_resc_alloc_output_t, *ptf_session_hw_resc_alloc_output_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_alloc_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource HW free */
+typedef struct tf_session_hw_resc_free_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query apply to RX */
+#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the query apply to TX */
+#define TF_SESSION_HW_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Starting index of L2 CTX TCAM entries allocated to the session */
+	uint16_t			 l2_ctx_tcam_entries_start;
+	/* Number of L2 CTX TCAM entries allocated */
+	uint16_t			 l2_ctx_tcam_entries_stride;
+	/* Starting index of profile functions allocated to the session */
+	uint16_t			 prof_func_start;
+	/* Number of profile functions allocated */
+	uint16_t			 prof_func_stride;
+	/* Starting index of profile TCAM entries allocated to the session */
+	uint16_t			 prof_tcam_entries_start;
+	/* Number of profile TCAM entries allocated */
+	uint16_t			 prof_tcam_entries_stride;
+	/* Starting index of EM profile ids allocated to the session */
+	uint16_t			 em_prof_id_start;
+	/* Number of EM profile ids allocated */
+	uint16_t			 em_prof_id_stride;
+	/* Starting index of EM record entries allocated to the session */
+	uint16_t			 em_record_entries_start;
+	/* Number of EM record entries allocated */
+	uint16_t			 em_record_entries_stride;
+	/* Starting index of WC TCAM profiles ids allocated to the session */
+	uint16_t			 wc_tcam_prof_id_start;
+	/* Number of WC TCAM profile ids allocated */
+	uint16_t			 wc_tcam_prof_id_stride;
+	/* Starting index of WC TCAM entries allocated to the session */
+	uint16_t			 wc_tcam_entries_start;
+	/* Number of WC TCAM allocated */
+	uint16_t			 wc_tcam_entries_stride;
+	/* Starting index of meter profiles allocated to the session */
+	uint16_t			 meter_profiles_start;
+	/* Number of meter profiles allocated */
+	uint16_t			 meter_profiles_stride;
+	/* Starting index of meter instance allocated to the session */
+	uint16_t			 meter_inst_start;
+	/* Number of meter instance allocated */
+	uint16_t			 meter_inst_stride;
+	/* Starting index of mirrors allocated to the session */
+	uint16_t			 mirrors_start;
+	/* Number of mirrors allocated */
+	uint16_t			 mirrors_stride;
+	/* Starting index of UPAR allocated to the session */
+	uint16_t			 upar_start;
+	/* Number of UPAR allocated */
+	uint16_t			 upar_stride;
+	/* Starting index of SP TCAM entries allocated to the session */
+	uint16_t			 sp_tcam_entries_start;
+	/* Number of SP TCAM entries allocated */
+	uint16_t			 sp_tcam_entries_stride;
+	/* Starting index of L2 functions allocated to the session */
+	uint16_t			 l2_func_start;
+	/* Number of L2 functions allocated */
+	uint16_t			 l2_func_stride;
+	/* Starting index of flexible key templates allocated to the session */
+	uint16_t			 flex_key_templ_start;
+	/* Number of flexible key templates allocated */
+	uint16_t			 flex_key_templ_stride;
+	/* Starting index of table scopes allocated to the session */
+	uint16_t			 tbl_scope_start;
+	/* Number of table scopes allocated */
+	uint16_t			 tbl_scope_stride;
+	/* Starting index of epoch0 entries allocated to the session */
+	uint16_t			 epoch0_entries_start;
+	/* Number of epoch0 entries allocated */
+	uint16_t			 epoch0_entries_stride;
+	/* Starting index of epoch1 entries allocated to the session */
+	uint16_t			 epoch1_entries_start;
+	/* Number of epoch1 entries allocated */
+	uint16_t			 epoch1_entries_stride;
+	/* Starting index of metadata allocated to the session */
+	uint16_t			 metadata_start;
+	/* Number of metadata allocated */
+	uint16_t			 metadata_stride;
+	/* Starting index of CT states allocated to the session */
+	uint16_t			 ct_state_start;
+	/* Number of CT states allocated */
+	uint16_t			 ct_state_stride;
+	/* Starting index of range profiles allocated to the session */
+	uint16_t			 range_prof_start;
+	/* Number of range profiles allocated */
+	uint16_t			 range_prof_stride;
+	/* Starting index of range entries allocated to the session */
+	uint16_t			 range_entries_start;
+	/* Number of range entries allocated */
+	uint16_t			 range_entries_stride;
+	/* Starting index of LAG table entries allocated to the session */
+	uint16_t			 lag_tbl_entries_start;
+	/* Number of LAG table entries allocated */
+	uint16_t			 lag_tbl_entries_stride;
+} tf_session_hw_resc_free_input_t, *ptf_session_hw_resc_free_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_free_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource HW flush */
+typedef struct tf_session_hw_resc_flush_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the flush applies to RX */
+#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the flush applies to TX */
+#define TF_SESSION_HW_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Starting index of L2 CTX TCAM entries allocated to the session */
+	uint16_t			 l2_ctx_tcam_entries_start;
+	/* Number of L2 CTX TCAM entries allocated */
+	uint16_t			 l2_ctx_tcam_entries_stride;
+	/* Starting index of profile functions allocated to the session */
+	uint16_t			 prof_func_start;
+	/* Number of profile functions allocated */
+	uint16_t			 prof_func_stride;
+	/* Starting index of profile TCAM entries allocated to the session */
+	uint16_t			 prof_tcam_entries_start;
+	/* Number of profile TCAM entries allocated */
+	uint16_t			 prof_tcam_entries_stride;
+	/* Starting index of EM profile ids allocated to the session */
+	uint16_t			 em_prof_id_start;
+	/* Number of EM profile ids allocated */
+	uint16_t			 em_prof_id_stride;
+	/* Starting index of EM record entries allocated to the session */
+	uint16_t			 em_record_entries_start;
+	/* Number of EM record entries allocated */
+	uint16_t			 em_record_entries_stride;
+	/* Starting index of WC TCAM profile ids allocated to the session */
+	uint16_t			 wc_tcam_prof_id_start;
+	/* Number of WC TCAM profile ids allocated */
+	uint16_t			 wc_tcam_prof_id_stride;
+	/* Starting index of WC TCAM entries allocated to the session */
+	uint16_t			 wc_tcam_entries_start;
+	/* Number of WC TCAM entries allocated */
+	uint16_t			 wc_tcam_entries_stride;
+	/* Starting index of meter profiles allocated to the session */
+	uint16_t			 meter_profiles_start;
+	/* Number of meter profiles allocated */
+	uint16_t			 meter_profiles_stride;
+	/* Starting index of meter instances allocated to the session */
+	uint16_t			 meter_inst_start;
+	/* Number of meter instances allocated */
+	uint16_t			 meter_inst_stride;
+	/* Starting index of mirrors allocated to the session */
+	uint16_t			 mirrors_start;
+	/* Number of mirrors allocated */
+	uint16_t			 mirrors_stride;
+	/* Starting index of UPAR allocated to the session */
+	uint16_t			 upar_start;
+	/* Number of UPAR allocated */
+	uint16_t			 upar_stride;
+	/* Starting index of SP TCAM entries allocated to the session */
+	uint16_t			 sp_tcam_entries_start;
+	/* Number of SP TCAM entries allocated */
+	uint16_t			 sp_tcam_entries_stride;
+	/* Starting index of L2 functions allocated to the session */
+	uint16_t			 l2_func_start;
+	/* Number of L2 functions allocated */
+	uint16_t			 l2_func_stride;
+	/* Starting index of flexible key templates allocated to the session */
+	uint16_t			 flex_key_templ_start;
+	/* Number of flexible key templates allocated */
+	uint16_t			 flex_key_templ_stride;
+	/* Starting index of table scopes allocated to the session */
+	uint16_t			 tbl_scope_start;
+	/* Number of table scopes allocated */
+	uint16_t			 tbl_scope_stride;
+	/* Starting index of epoch0 entries allocated to the session */
+	uint16_t			 epoch0_entries_start;
+	/* Number of epoch0 entries allocated */
+	uint16_t			 epoch0_entries_stride;
+	/* Starting index of epoch1 entries allocated to the session */
+	uint16_t			 epoch1_entries_start;
+	/* Number of epoch1 entries allocated */
+	uint16_t			 epoch1_entries_stride;
+	/* Starting index of metadata allocated to the session */
+	uint16_t			 metadata_start;
+	/* Number of metadata allocated */
+	uint16_t			 metadata_stride;
+	/* Starting index of CT states allocated to the session */
+	uint16_t			 ct_state_start;
+	/* Number of CT states allocated */
+	uint16_t			 ct_state_stride;
+	/* Starting index of range profiles allocated to the session */
+	uint16_t			 range_prof_start;
+	/* Number of range profiles allocated */
+	uint16_t			 range_prof_stride;
+	/* Starting index of range entries allocated to the session */
+	uint16_t			 range_entries_start;
+	/* Number of range entries allocated */
+	uint16_t			 range_entries_stride;
+	/* Starting index of LAG table entries allocated to the session */
+	uint16_t			 lag_tbl_entries_start;
+	/* Number of LAG table entries allocated */
+	uint16_t			 lag_tbl_entries_stride;
+} tf_session_hw_resc_flush_input_t, *ptf_session_hw_resc_flush_input_t;
+BUILD_BUG_ON(sizeof(tf_session_hw_resc_flush_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource SRAM qcaps */
+typedef struct tf_session_sram_resc_qcaps_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the query applies to RX */
+#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the query applies to TX */
+#define TF_SESSION_SRAM_RESC_QCAPS_INPUT_FLAGS_DIR_TX	  (0x1)
+} tf_session_sram_resc_qcaps_input_t, *ptf_session_sram_resc_qcaps_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_qcaps_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource SRAM qcaps */
+typedef struct tf_session_sram_resc_qcaps_output {
+	/* Flags */
+	uint32_t			 flags;
+	/* When set to 0, indicates Static partitioning */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_STATIC	  (0x0)
+	/* When set to 1, indicates Strategy 1 */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_1	  (0x1)
+	/* When set to 2, indicates Strategy 2 */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_2	  (0x2)
+	/* When set to 3, indicates Strategy 3 */
+#define TF_SESSION_SRAM_RESC_QCAPS_OUTPUT_FLAGS_SESS_RES_STRATEGY_3	  (0x3)
+	/* Minimum guaranteed number of Full Action */
+	uint16_t			 full_action_min;
+	/* Maximum non-guaranteed number of Full Action */
+	uint16_t			 full_action_max;
+	/* Minimum guaranteed number of MCG */
+	uint16_t			 mcg_min;
+	/* Maximum non-guaranteed number of MCG */
+	uint16_t			 mcg_max;
+	/* Minimum guaranteed number of Encap 8B */
+	uint16_t			 encap_8b_min;
+	/* Maximum non-guaranteed number of Encap 8B */
+	uint16_t			 encap_8b_max;
+	/* Minimum guaranteed number of Encap 16B */
+	uint16_t			 encap_16b_min;
+	/* Maximum non-guaranteed number of Encap 16B */
+	uint16_t			 encap_16b_max;
+	/* Minimum guaranteed number of Encap 64B */
+	uint16_t			 encap_64b_min;
+	/* Maximum non-guaranteed number of Encap 64B */
+	uint16_t			 encap_64b_max;
+	/* Minimum guaranteed number of SP SMAC */
+	uint16_t			 sp_smac_min;
+	/* Maximum non-guaranteed number of SP SMAC */
+	uint16_t			 sp_smac_max;
+	/* Minimum guaranteed number of SP SMAC IPv4 */
+	uint16_t			 sp_smac_ipv4_min;
+	/* Maximum non-guaranteed number of SP SMAC IPv4 */
+	uint16_t			 sp_smac_ipv4_max;
+	/* Minimum guaranteed number of SP SMAC IPv6 */
+	uint16_t			 sp_smac_ipv6_min;
+	/* Maximum non-guaranteed number of SP SMAC IPv6 */
+	uint16_t			 sp_smac_ipv6_max;
+	/* Minimum guaranteed number of Counter 64B */
+	uint16_t			 counter_64b_min;
+	/* Maximum non-guaranteed number of Counter 64B */
+	uint16_t			 counter_64b_max;
+	/* Minimum guaranteed number of NAT SPORT */
+	uint16_t			 nat_sport_min;
+	/* Maximum non-guaranteed number of NAT SPORT */
+	uint16_t			 nat_sport_max;
+	/* Minimum guaranteed number of NAT DPORT */
+	uint16_t			 nat_dport_min;
+	/* Maximum non-guaranteed number of NAT DPORT */
+	uint16_t			 nat_dport_max;
+	/* Minimum guaranteed number of NAT S_IPV4 */
+	uint16_t			 nat_s_ipv4_min;
+	/* Maximum non-guaranteed number of NAT S_IPV4 */
+	uint16_t			 nat_s_ipv4_max;
+	/* Minimum guaranteed number of NAT D_IPV4 */
+	uint16_t			 nat_d_ipv4_min;
+	/* Maximum non-guaranteed number of NAT D_IPV4 */
+	uint16_t			 nat_d_ipv4_max;
+} tf_session_sram_resc_qcaps_output_t, *ptf_session_sram_resc_qcaps_output_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_qcaps_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource SRAM alloc */
+typedef struct tf_session_sram_resc_alloc_input {
+	/* FW Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the allocation applies to RX */
+#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the allocation applies to TX */
+#define TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Number of full action SRAM entries to be allocated */
+	uint16_t			 num_full_action;
+	/* Number of multicast groups to be allocated */
+	uint16_t			 num_mcg;
+	/* Number of Encap 8B entries to be allocated */
+	uint16_t			 num_encap_8b;
+	/* Number of Encap 16B entries to be allocated */
+	uint16_t			 num_encap_16b;
+	/* Number of Encap 64B entries to be allocated */
+	uint16_t			 num_encap_64b;
+	/* Number of SP SMAC entries to be allocated */
+	uint16_t			 num_sp_smac;
+	/* Number of SP SMAC IPv4 entries to be allocated */
+	uint16_t			 num_sp_smac_ipv4;
+	/* Number of SP SMAC IPv6 entries to be allocated */
+	uint16_t			 num_sp_smac_ipv6;
+	/* Number of Counter 64B entries to be allocated */
+	uint16_t			 num_counter_64b;
+	/* Number of NAT source ports to be allocated */
+	uint16_t			 num_nat_sport;
+	/* Number of NAT destination ports to be allocated */
+	uint16_t			 num_nat_dport;
+	/* Number of NAT source IPV4 addresses to be allocated */
+	uint16_t			 num_nat_s_ipv4;
+	/* Number of NAT destination IPV4 addresses to be allocated */
+	uint16_t			 num_nat_d_ipv4;
+} tf_session_sram_resc_alloc_input_t, *ptf_session_sram_resc_alloc_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_alloc_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for session resource SRAM alloc */
+typedef struct tf_session_sram_resc_alloc_output {
+	/* Unused */
+	uint8_t			  unused[2];
+	/* Starting index of full action SRAM entries allocated to the session */
+	uint16_t			 full_action_start;
+	/* Number of full action SRAM entries allocated */
+	uint16_t			 full_action_stride;
+	/* Starting index of multicast groups allocated to this session */
+	uint16_t			 mcg_start;
+	/* Number of multicast groups allocated */
+	uint16_t			 mcg_stride;
+	/* Starting index of encap 8B entries allocated to the session */
+	uint16_t			 encap_8b_start;
+	/* Number of encap 8B entries allocated */
+	uint16_t			 encap_8b_stride;
+	/* Starting index of encap 16B entries allocated to the session */
+	uint16_t			 encap_16b_start;
+	/* Number of encap 16B entries allocated */
+	uint16_t			 encap_16b_stride;
+	/* Starting index of encap 64B entries allocated to the session */
+	uint16_t			 encap_64b_start;
+	/* Number of encap 64B entries allocated */
+	uint16_t			 encap_64b_stride;
+	/* Starting index of SP SMAC entries allocated to the session */
+	uint16_t			 sp_smac_start;
+	/* Number of SP SMAC entries allocated */
+	uint16_t			 sp_smac_stride;
+	/* Starting index of SP SMAC IPv4 entries allocated to the session */
+	uint16_t			 sp_smac_ipv4_start;
+	/* Number of SP SMAC IPv4 entries allocated */
+	uint16_t			 sp_smac_ipv4_stride;
+	/* Starting index of SP SMAC IPv6 entries allocated to the session */
+	uint16_t			 sp_smac_ipv6_start;
+	/* Number of SP SMAC IPv6 entries allocated */
+	uint16_t			 sp_smac_ipv6_stride;
+	/* Starting index of Counter 64B entries allocated to the session */
+	uint16_t			 counter_64b_start;
+	/* Number of Counter 64B entries allocated */
+	uint16_t			 counter_64b_stride;
+	/* Starting index of NAT source ports allocated to the session */
+	uint16_t			 nat_sport_start;
+	/* Number of NAT source ports allocated */
+	uint16_t			 nat_sport_stride;
+	/* Starting index of NAT destination ports allocated to the session */
+	uint16_t			 nat_dport_start;
+	/* Number of NAT destination ports allocated */
+	uint16_t			 nat_dport_stride;
+	/* Starting index of NAT source IPV4 addresses allocated to the session */
+	uint16_t			 nat_s_ipv4_start;
+	/* Number of NAT source IPV4 addresses allocated */
+	uint16_t			 nat_s_ipv4_stride;
+	/*
+	 * Starting index of NAT destination IPV4 addresses allocated to the
+	 * session
+	 */
+	uint16_t			 nat_d_ipv4_start;
+	/* Number of NAT destination IPV4 addresses allocated */
+	uint16_t			 nat_d_ipv4_stride;
+} tf_session_sram_resc_alloc_output_t, *ptf_session_sram_resc_alloc_output_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_alloc_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for session resource SRAM free */
+typedef struct tf_session_sram_resc_free_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the free applies to RX */
+#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the free applies to TX */
+#define TF_SESSION_SRAM_RESC_FREE_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Starting index of full action SRAM entries allocated to the session */
+	uint16_t			 full_action_start;
+	/* Number of full action SRAM entries allocated */
+	uint16_t			 full_action_stride;
+	/* Starting index of multicast groups allocated to this session */
+	uint16_t			 mcg_start;
+	/* Number of multicast groups allocated */
+	uint16_t			 mcg_stride;
+	/* Starting index of encap 8B entries allocated to the session */
+	uint16_t			 encap_8b_start;
+	/* Number of encap 8B entries allocated */
+	uint16_t			 encap_8b_stride;
+	/* Starting index of encap 16B entries allocated to the session */
+	uint16_t			 encap_16b_start;
+	/* Number of encap 16B entries allocated */
+	uint16_t			 encap_16b_stride;
+	/* Starting index of encap 64B entries allocated to the session */
+	uint16_t			 encap_64b_start;
+	/* Number of encap 64B entries allocated */
+	uint16_t			 encap_64b_stride;
+	/* Starting index of SP SMAC entries allocated to the session */
+	uint16_t			 sp_smac_start;
+	/* Number of SP SMAC entries allocated */
+	uint16_t			 sp_smac_stride;
+	/* Starting index of SP SMAC IPv4 entries allocated to the session */
+	uint16_t			 sp_smac_ipv4_start;
+	/* Number of SP SMAC IPv4 entries allocated */
+	uint16_t			 sp_smac_ipv4_stride;
+	/* Starting index of SP SMAC IPv6 entries allocated to the session */
+	uint16_t			 sp_smac_ipv6_start;
+	/* Number of SP SMAC IPv6 entries allocated */
+	uint16_t			 sp_smac_ipv6_stride;
+	/* Starting index of Counter 64B entries allocated to the session */
+	uint16_t			 counter_64b_start;
+	/* Number of Counter 64B entries allocated */
+	uint16_t			 counter_64b_stride;
+	/* Starting index of NAT source ports allocated to the session */
+	uint16_t			 nat_sport_start;
+	/* Number of NAT source ports allocated */
+	uint16_t			 nat_sport_stride;
+	/* Starting index of NAT destination ports allocated to the session */
+	uint16_t			 nat_dport_start;
+	/* Number of NAT destination ports allocated */
+	uint16_t			 nat_dport_stride;
+	/* Starting index of NAT source IPV4 addresses allocated to the session */
+	uint16_t			 nat_s_ipv4_start;
+	/* Number of NAT source IPV4 addresses allocated */
+	uint16_t			 nat_s_ipv4_stride;
+	/*
+	 * Starting index of NAT destination IPV4 addresses allocated to the
+	 * session
+	 */
+	uint16_t			 nat_d_ipv4_start;
+	/* Number of NAT destination IPV4 addresses allocated */
+	uint16_t			 nat_d_ipv4_stride;
+} tf_session_sram_resc_free_input_t, *ptf_session_sram_resc_free_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_free_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for session resource SRAM flush */
+typedef struct tf_session_sram_resc_flush_input {
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the flush applies to RX */
+#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the flush applies to TX */
+#define TF_SESSION_SRAM_RESC_FLUSH_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* Starting index of full action SRAM entries allocated to the session */
+	uint16_t			 full_action_start;
+	/* Number of full action SRAM entries allocated */
+	uint16_t			 full_action_stride;
+	/* Starting index of multicast groups allocated to this session */
+	uint16_t			 mcg_start;
+	/* Number of multicast groups allocated */
+	uint16_t			 mcg_stride;
+	/* Starting index of encap 8B entries allocated to the session */
+	uint16_t			 encap_8b_start;
+	/* Number of encap 8B entries allocated */
+	uint16_t			 encap_8b_stride;
+	/* Starting index of encap 16B entries allocated to the session */
+	uint16_t			 encap_16b_start;
+	/* Number of encap 16B entries allocated */
+	uint16_t			 encap_16b_stride;
+	/* Starting index of encap 64B entries allocated to the session */
+	uint16_t			 encap_64b_start;
+	/* Number of encap 64B entries allocated */
+	uint16_t			 encap_64b_stride;
+	/* Starting index of SP SMAC entries allocated to the session */
+	uint16_t			 sp_smac_start;
+	/* Number of SP SMAC entries allocated */
+	uint16_t			 sp_smac_stride;
+	/* Starting index of SP SMAC IPv4 entries allocated to the session */
+	uint16_t			 sp_smac_ipv4_start;
+	/* Number of SP SMAC IPv4 entries allocated */
+	uint16_t			 sp_smac_ipv4_stride;
+	/* Starting index of SP SMAC IPv6 entries allocated to the session */
+	uint16_t			 sp_smac_ipv6_start;
+	/* Number of SP SMAC IPv6 entries allocated */
+	uint16_t			 sp_smac_ipv6_stride;
+	/* Starting index of Counter 64B entries allocated to the session */
+	uint16_t			 counter_64b_start;
+	/* Number of Counter 64B entries allocated */
+	uint16_t			 counter_64b_stride;
+	/* Starting index of NAT source ports allocated to the session */
+	uint16_t			 nat_sport_start;
+	/* Number of NAT source ports allocated */
+	uint16_t			 nat_sport_stride;
+	/* Starting index of NAT destination ports allocated to the session */
+	uint16_t			 nat_dport_start;
+	/* Number of NAT destination ports allocated */
+	uint16_t			 nat_dport_stride;
+	/* Starting index of NAT source IPV4 addresses allocated to the session */
+	uint16_t			 nat_s_ipv4_start;
+	/* Number of NAT source IPV4 addresses allocated */
+	uint16_t			 nat_s_ipv4_stride;
+	/*
+	 * Starting index of NAT destination IPV4 addresses allocated to the
+	 * session
+	 */
+	uint16_t			 nat_d_ipv4_start;
+	/* Number of NAT destination IPV4 addresses allocated */
+	uint16_t			 nat_d_ipv4_stride;
+} tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
+BUILD_BUG_ON(sizeof(tf_session_sram_resc_flush_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Input params for table type get */
+typedef struct tf_tbl_type_get_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the get applies to RX */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_RX			(0x0)
+	/* When set to 1, indicates the get applies to TX */
+#define TF_TBL_TYPE_GET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* Type of the object to get */
+	uint32_t			 type;
+	/* Index to get */
+	uint32_t			 index;
+} tf_tbl_type_get_input_t, *ptf_tbl_type_get_input_t;
+BUILD_BUG_ON(sizeof(tf_tbl_type_get_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for table type get */
+typedef struct tf_tbl_type_get_output {
+	/* Size of the data read in bytes */
+	uint16_t			 size;
+	/* Data read */
+	uint8_t			  data[TF_BULK_RECV];
+} tf_tbl_type_get_output_t, *ptf_tbl_type_get_output_t;
+BUILD_BUG_ON(sizeof(tf_tbl_type_get_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for EM internal rule insert */
+typedef struct tf_em_internal_insert_input {
+	/* Firmware Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the insert applies to RX */
+#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the insert applies to TX */
+#define TF_EM_INTERNAL_INSERT_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* strength */
+	uint16_t			 strength;
+	/* index to action */
+	uint32_t			 action_ptr;
+	/* index of em record */
+	uint32_t			 em_record_idx;
+	/* EM Key value */
+	uint64_t			 em_key[8];
+	/* number of bits in em_key */
+	uint16_t			 em_key_bitlen;
+} tf_em_internal_insert_input_t, *ptf_em_internal_insert_input_t;
+BUILD_BUG_ON(sizeof(tf_em_internal_insert_input_t) <= TF_MAX_REQ_SIZE);
+
+/* Output params for EM internal rule insert */
+typedef struct tf_em_internal_insert_output {
+	/* EM record pointer index */
+	uint16_t			 rptr_index;
+	/* EM record offset 0~3 */
+	uint8_t			  rptr_entry;
+} tf_em_internal_insert_output_t, *ptf_em_internal_insert_output_t;
+BUILD_BUG_ON(sizeof(tf_em_internal_insert_output_t) <= TF_MAX_RESP_SIZE);
+
+/* Input params for EM INTERNAL rule delete */
+typedef struct tf_em_internal_delete_input {
+	/* Session Id */
+	uint32_t			 tf_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the delete applies to RX */
+#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_RX	  (0x0)
+	/* When set to 1, indicates the delete applies to TX */
+#define TF_EM_INTERNAL_DELETE_INPUT_FLAGS_DIR_TX	  (0x1)
+	/* EM internal flow handle */
+	uint64_t			 flow_handle;
+	/* EM Key value */
+	uint64_t			 em_key[8];
+	/* number of bits in em_key */
+	uint16_t			 em_key_bitlen;
+} tf_em_internal_delete_input_t, *ptf_em_internal_delete_input_t;
+BUILD_BUG_ON(sizeof(tf_em_internal_delete_input_t) <= TF_MAX_REQ_SIZE);
+
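+/*
+ * Illustrative population of one of the request typedefs above (a
+ * sketch only, not a definitive caller; byte ordering follows the
+ * tfp_cpu_to_le_32() pattern used in tf_msg.c, and the counts here
+ * are made up):
+ *
+ *	tf_session_sram_resc_alloc_input_t req = { 0 };
+ *
+ *	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+ *	req.flags = TF_SESSION_SRAM_RESC_ALLOC_INPUT_FLAGS_DIR_TX;
+ *	req.num_full_action = 8;
+ *	req.num_counter_64b = 16;
+ */
+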
+#endif /* _HWRM_TF_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
new file mode 100644
index 0000000..6bafae5
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdio.h>
+
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "bnxt.h"
+
+int
+tf_open_session(struct tf                    *tfp,
+		struct tf_open_session_parms *parms)
+{
+	int rc;
+	struct tf_session *session;
+	struct tfp_calloc_parms alloc_parms;
+	unsigned int domain, bus, slot, device;
+	uint8_t fw_session_id;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	/* Filter out any non-supported device types on the Core
+	 * side. It is assumed that the Firmware will be supported if
+	 * firmware open session succeeds.
+	 */
+	if (parms->device_type != TF_DEVICE_TYPE_WH)
+		return -ENOTSUP;
+
+	/* Build the beginning of session_id */
+	rc = sscanf(parms->ctrl_chan_name,
+		    "%x:%x:%x.%d",
+		    &domain,
+		    &bus,
+		    &slot,
+		    &device);
+	if (rc != 4) {
+		PMD_DRV_LOG(ERR,
+			    "Failed to scan device ctrl_chan_name\n");
+		return -EINVAL;
+	}
+
+	/* open FW session and get a new session_id */
+	rc = tf_msg_session_open(tfp,
+				 parms->ctrl_chan_name,
+				 &fw_session_id);
+	if (rc) {
+		/* Log error */
+		if (rc == -EEXIST)
+			PMD_DRV_LOG(ERR,
+				    "Session is already open, rc:%d\n",
+				    rc);
+		else
+			PMD_DRV_LOG(ERR,
+				    "Open message send failed, rc:%d\n",
+				    rc);
+
+		parms->session_id.id = TF_FW_SESSION_ID_INVALID;
+		return rc;
+	}
+
+	/* Allocate session */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = sizeof(struct tf_session_info);
+	alloc_parms.alignment = 0;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate session info, rc:%d\n",
+			    rc);
+		goto cleanup;
+	}
+
+	tfp->session = (struct tf_session_info *)alloc_parms.mem_va;
+
+	/* Allocate core data for the session */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = sizeof(struct tf_session);
+	alloc_parms.alignment = 0;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate session data, rc:%d\n",
+			    rc);
+		goto cleanup;
+	}
+
+	tfp->session->core_data = alloc_parms.mem_va;
+
+	session = (struct tf_session *)tfp->session->core_data;
+	tfp_memcpy(session->ctrl_chan_name,
+		   parms->ctrl_chan_name,
+		   TF_SESSION_NAME_MAX);
+
+	/* Initialize Session */
+	session->device_type = parms->device_type;
+
+	/* Construct the Session ID */
+	session->session_id.internal.domain = domain;
+	session->session_id.internal.bus = bus;
+	session->session_id.internal.device = device;
+	session->session_id.internal.fw_session_id = fw_session_id;
+
+	rc = tf_msg_session_qcfg(tfp);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Query config message send failed, rc:%d\n",
+			    rc);
+		goto cleanup_close;
+	}
+
+	session->ref_count++;
+
+	/* Return session ID */
+	parms->session_id = session->session_id;
+
+	PMD_DRV_LOG(INFO,
+		    "Session created, session_id:%d\n",
+		    parms->session_id.id);
+
+	PMD_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    parms->session_id.internal.domain,
+		    parms->session_id.internal.bus,
+		    parms->session_id.internal.device,
+		    parms->session_id.internal.fw_session_id);
+
+	return 0;
+
+ cleanup:
+	tfp_free(tfp->session->core_data);
+	tfp_free(tfp->session);
+	tfp->session = NULL;
+	return rc;
+
+ cleanup_close:
+	return -EINVAL;
+}
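+
+/*
+ * Example open sequence (an illustrative sketch, not part of this
+ * file; assumes a bnxt PMD instance 'bp' with its embedded struct tf
+ * and a valid DPDK port id, per the tf_core.h notes):
+ *
+ *	struct tf_open_session_parms parms = { 0 };
+ *	int rc;
+ *
+ *	rte_eth_dev_get_name_by_port(port_id, parms.ctrl_chan_name);
+ *	parms.device_type = TF_DEVICE_TYPE_WH;
+ *	rc = tf_open_session(&bp->tfp, &parms);
+ *	if (rc == 0)
+ *		(parms.session_id now holds the constructed session id)
+ */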
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
new file mode 100644
index 0000000..69433ac
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -0,0 +1,347 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_CORE_H_
+#define _TF_CORE_H_
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdio.h>
+
+#include "tf_project.h"
+
+/**
+ * @file
+ *
+ * Truflow Core API Header File
+ */
+
+/********** BEGIN Truflow Core DEFINITIONS **********/
+
+/**
+ * direction
+ */
+enum tf_dir {
+	TF_DIR_RX,  /**< Receive */
+	TF_DIR_TX,  /**< Transmit */
+	TF_DIR_MAX
+};
+
+/********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
+
+/**
+ * @page general General
+ *
+ * @ref tf_open_session
+ *
+ * @ref tf_attach_session
+ *
+ * @ref tf_close_session
+ */
+
+
+/** Session Version defines
+ *
+ * The version controls the format of the tf_session and
+ * tf_session_info structure. This is to ensure that upgrades between
+ * versions can be supported.
+ */
+#define TF_SESSION_VER_MAJOR  1   /**< Major Version */
+#define TF_SESSION_VER_MINOR  0   /**< Minor Version */
+#define TF_SESSION_VER_UPDATE 0   /**< Update Version */
+
+/** Session Name
+ *
+ * Name of the TruFlow control channel interface.  Expects
+ * format to be RTE Name specific, i.e. rte_eth_dev_get_name_by_port()
+ */
+#define TF_SESSION_NAME_MAX       64
+
+#define TF_FW_SESSION_ID_INVALID  0xFF  /**< Invalid FW Session ID define */
+
+/** Session Identifier
+ *
+ * Unique session identifier which includes PCIe bus info to
+ * distinguish the PF and session info to identify the associated
+ * TruFlow session. Session ID is constructed from the passed in
+ * ctrl_chan_name in tf_open_session() together with an allocated
+ * fw_session_id. Done by TruFlow on tf_open_session().
+ */
+union tf_session_id {
+	uint32_t id;
+	struct {
+		uint8_t domain;
+		uint8_t bus;
+		uint8_t device;
+		uint8_t fw_session_id;
+	} internal;
+};
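+
+/*
+ * Worked example (illustrative): a ctrl_chan_name of "0000:03:02.0"
+ * parsed as "%x:%x:%x.%d" in tf_open_session() gives domain 0, bus 3
+ * and device 0 (the slot, 2, is parsed but not stored); with an
+ * allocated fw_session_id of 1 the internal view becomes
+ * {0, 3, 0, 1}, and .id is the same four bytes read as one 32-bit
+ * value.
+ */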
+
+/** Session Version
+ *
+ * The version controls the format of the tf_session and
+ * tf_session_info structure. This is to ensure that upgrades between
+ * versions can be supported.
+ *
+ * Please see the TF_SESSION_VER_MAJOR/MINOR/UPDATE defines.
+ */
+struct tf_session_version {
+	uint8_t major;
+	uint8_t minor;
+	uint8_t update;
+};
+
+/** Session supported device types
+ *
+ */
+enum tf_device_type {
+	TF_DEVICE_TYPE_WH = 0, /**< Whitney+  */
+	TF_DEVICE_TYPE_BRD2,   /**< TBD       */
+	TF_DEVICE_TYPE_BRD3,   /**< TBD       */
+	TF_DEVICE_TYPE_BRD4,   /**< TBD       */
+	TF_DEVICE_TYPE_MAX     /**< Maximum   */
+};
+
+/** TruFlow Session Information
+ *
+ * Structure defining a TruFlow Session, also known as a Management
+ * session. This structure is initialized at time of
+ * tf_open_session(). It is passed to all of the TruFlow APIs as a way
+ * to prescribe and isolate resources between different TruFlow ULP
+ * Applications.
+ */
+struct tf_session_info {
+	/**
+	 * TrueFlow Version. Used to control the structure layout when
+	 * sharing sessions. No guarantee that a secondary process
+	 * would come from the same version of an executable.
+	 * TruFlow initializes this variable on tf_open_session().
+	 *
+	 * Owner:  TruFlow
+	 * Access: TruFlow
+	 */
+	struct tf_session_version ver;
+	/**
+	 * will be STAILQ_ENTRY(tf_session_info) next
+	 *
+	 * Owner:  ULP
+	 * Access: ULP
+	 */
+	void                 *next;
+	/**
+	 * Session ID is a unique identifier for the session. TruFlow
+	 * initializes this variable during tf_open_session()
+	 * processing.
+	 *
+	 * Owner:  TruFlow
+	 * Access: Truflow & ULP
+	 */
+	union tf_session_id   session_id;
+	/**
+	 * Protects access to core_data. Lock is initialized and owned
+	 * by ULP. TruFlow can access the core_data without checking
+	 * the lock.
+	 *
+	 * Owner:  ULP
+	 * Access: ULP
+	 */
+	uint8_t               spin_lock;
+	/**
+	 * The core_data holds the TruFlow tf_session data
+	 * structure. This memory is allocated and owned by TruFlow on
+	 * tf_open_session().
+	 *
+	 * TruFlow uses this memory for session management control
+	 * until the session is closed by ULP. Access control is done
+	 * by the spin_lock which ULP controls ahead of TruFlow API
+	 * calls.
+	 *
+	 * Please see tf_open_session_parms for specification details
+	 * on this variable.
+	 *
+	 * Owner:  TruFlow
+	 * Access: TruFlow
+	 */
+	void                 *core_data;
+	/**
+	 * The core_data_sz_bytes specifies the size of core_data in
+	 * bytes.
+	 *
+	 * The size is set by TruFlow on tf_open_session().
+	 *
+	 * Please see tf_open_session_parms for specification details
+	 * on this variable.
+	 *
+	 * Owner:  TruFlow
+	 * Access: TruFlow
+	 */
+	uint32_t              core_data_sz_bytes;
+};
+
+/** TruFlow handle
+ *
+ * Contains a pointer to the session info. Allocated by ULP and passed
+ * to TruFlow using tf_open_session(). TruFlow will populate the
+ * session info at that time. Additional 'opens' can be done using
+ * same session_info by using tf_attach_session().
+ *
+ * It is expected that ULP allocates this memory as shared memory.
+ *
+ * NOTE: This struct must be within the BNXT PMD struct bnxt
+ *       (bp). This allows use of container_of() to get access to the PMD.
+ */
+struct tf {
+	struct tf_session_info *session;
+};
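+
+/*
+ * Sketch of the required embedding (illustrative only; see bnxt.h for
+ * the actual definition):
+ *
+ *	struct bnxt {
+ *		...
+ *		struct tf tfp;
+ *	};
+ *
+ * which is what lets the portability layer recover the PMD with
+ * container_of(tfp, struct bnxt, tfp), as done in tfp.c.
+ */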
+
+
+/**
+ * tf_open_session parameters definition.
+ */
+struct tf_open_session_parms {
+	/** [in] ctrl_chan_name
+	 *
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * The ctrl_chan_name can be looked up by using
+	 * rte_eth_dev_get_name_by_port() within the ULP.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+	/** [in] shadow_copy
+	 *
+	 * Boolean controlling the use and availability of shadow
+	 * copy. Shadow copy will allow the TruFlow to keep track of
+	 * resource content on the firmware side without having to
+	 * query firmware. Additional private session core_data will
+	 * be allocated if this boolean is set to 'true', default
+	 * 'false'.
+	 *
+	 * Size of memory depends on the NVM Resource settings for the
+	 * control channel.
+	 */
+	bool shadow_copy;
+	/** [in/out] session_id
+	 *
+	 * Session_id is unique per session.
+	 *
+	 * Session_id is composed of domain, bus, device and
+	 * fw_session_id. The construction is done by parsing the
+	 * ctrl_chan_name together with allocation of a fw_session_id.
+	 *
+	 * The session_id allows a session to be shared between devices.
+	 */
+	union tf_session_id session_id;
+	/** [in] device type
+	 *
+	 * Device type is passed, one of Wh+, Brd2, Brd3, Brd4
+	 */
+	enum tf_device_type device_type;
+};
+
+/**
+ * Opens a new TruFlow management session.
+ *
+ * TruFlow will allocate session specific memory, shared memory, to
+ * hold its session data. This data is private to TruFlow.
+ *
+ * Multiple PFs can share the same session. An association, refcount,
+ * between session and PFs is maintained within TruFlow. Thus, a PF
+ * can attach to an existing session, see tf_attach_session().
+ *
+ * No other TruFlow APIs will succeed unless this API is first called and
+ * succeeds.
+ *
+ * tf_open_session() returns a session id that can be used on attach.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ * [in] parms
+ *   Pointer to open parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_open_session(struct tf *tfp,
+		    struct tf_open_session_parms *parms);
+
+struct tf_attach_session_parms {
+	/** [in] ctrl_chan_name
+	 *
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * The ctrl_chan_name can be looked up by using
+	 * rte_eth_dev_get_name_by_port() within the ULP.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/** [in] attach_chan_name
+	 *
+	 * String containing name of attach channel interface to be
+	 * used for this session.
+	 *
+	 * The attach_chan_name must be given to a 2nd process after
+	 * the primary process has been created. This is the
+	 * ctrl_chan_name of the primary process and is used to find
+	 * the shared memory for the session that the attach is going
+	 * to use.
+	 */
+	char attach_chan_name[TF_SESSION_NAME_MAX];
+
+	/** [in] session_id
+	 *
+	 * Session_id is unique per session. For Attach the session_id
+	 * should be the session_id that was returned on the first
+	 * open.
+	 *
+	 * Session_id is composed of domain, bus, device and
+	 * fw_session_id. The construction is done by parsing the
+	 * ctrl_chan_name together with allocation of a fw_session_id
+	 * during tf_open_session().
+	 *
+	 * A reference count will be incremented on attach. A session
+	 * is first fully closed when reference count is zero by
+	 * calling tf_close_session().
+	 */
+	union tf_session_id session_id;
+};
+
+/**
+ * Attaches to an existing session. Used when more than one PF wants
+ * to share a single session. In that case all TruFlow management
+ * traffic will be sent to the TruFlow firmware using the 'PF' that
+ * did the attach not the session ctrl channel.
+ *
+ * Attach will increment a ref count to manage the shared session data.
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] parms, pointer to attach parameters
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_attach_session(struct tf *tfp,
+		      struct tf_attach_session_parms *parms);
+
+/**
+ * Closes an existing session. Cleans up all hardware and firmware
+ * state associated with the TruFlow application session once the last
+ * PF associated with the session closes and the reference count drops
+ * to zero.
+ *
+ * Returns success or failure code.
+ */
+int tf_close_session(struct tf *tfp);
+
+#endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
new file mode 100644
index 0000000..2b68681
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <inttypes.h>
+#include <stdbool.h>
+#include <stdlib.h>
+
+#include "bnxt.h"
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tfp.h"
+
+#include "tf_msg_common.h"
+#include "tf_msg.h"
+#include "hsi_struct_def_dpdk.h"
+#include "hwrm_tf.h"
+
+/**
+ * Sends session open request to TF Firmware
+ */
+int
+tf_msg_session_open(struct tf *tfp,
+		    char *ctrl_chan_name,
+		    uint8_t *fw_session_id)
+{
+	int rc;
+	struct hwrm_tf_session_open_input req = { 0 };
+	struct hwrm_tf_session_open_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	memcpy(&req.session_name, ctrl_chan_name, TF_SESSION_NAME_MAX);
+
+	parms.tf_type = HWRM_TF_SESSION_OPEN;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*fw_session_id = resp.fw_session_id;
+
+	return rc;
+}
+
+/**
+ * Sends session query config request to TF Firmware
+ */
+int
+tf_msg_session_qcfg(struct tf *tfp)
+{
+	int rc;
+	struct hwrm_tf_session_qcfg_input  req = { 0 };
+	struct hwrm_tf_session_qcfg_output resp = { 0 };
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	parms.tf_type = HWRM_TF_SESSION_QCFG;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
new file mode 100644
index 0000000..20ebf2e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_MSG_H_
+#define _TF_MSG_H_
+
+#include "tf_rm.h"
+
+struct tf;
+
+/**
+ * Sends session open request to Firmware
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] ctrl_chan_name
+ *   PCI name of the control channel
+ *
+ * [in/out] fw_session_id
+ *   Pointer to the fw_session_id that is allocated on firmware side
+ *
+ * Returns:
+ *   0 on success, negative error code on failure
+ */
+int tf_msg_session_open(struct tf *tfp,
+			char *ctrl_chan_name,
+			uint8_t *fw_session_id);
+
+/**
+ * Sends session query config request to TF Firmware
+ */
+int tf_msg_session_qcfg(struct tf *tfp);
+
+/**
+ * Sends session HW resource query capability request to TF Firmware
+ */
+int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
+				 enum tf_dir dir,
+				 struct tf_rm_hw_query *hw_query);
+
+#endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_msg_common.h b/drivers/net/bnxt/tf_core/tf_msg_common.h
new file mode 100644
index 0000000..7a4e825
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_msg_common.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_MSG_COMMON_H_
+#define _TF_MSG_COMMON_H_
+
+/* Communication Mailboxes */
+#define TF_CHIMP_MB 0
+#define TF_KONG_MB  1
+
+/* Helper to fill in the parms structure */
+#define MSG_PREP(parms, mb, type, subtype, req, resp) do {	\
+		parms.mailbox = mb;				\
+		parms.tf_type = type;				\
+		parms.tf_subtype = subtype;			\
+		parms.tf_resp_code = 0;				\
+		parms.req_size = sizeof(req);			\
+		parms.req_data = (uint32_t *)&(req);		\
+		parms.resp_size = sizeof(resp);			\
+		parms.resp_data = (uint32_t *)&(resp);		\
+	} while (0)
+
+#define MSG_PREP_NO_REQ(parms, mb, type, subtype, resp) do {	\
+		parms.mailbox = mb;				\
+		parms.tf_type = type;				\
+		parms.tf_subtype = subtype;			\
+		parms.tf_resp_code = 0;				\
+		parms.req_size  = 0;				\
+		parms.req_data  = NULL;				\
+		parms.resp_size = sizeof(resp);			\
+		parms.resp_data = (uint32_t *)&(resp);		\
+	} while (0)
+
+#define MSG_PREP_NO_RESP(parms, mb, type, subtype, req) do {	\
+		parms.mailbox = mb;				\
+		parms.tf_type = type;				\
+		parms.tf_subtype = subtype;			\
+		parms.tf_resp_code = 0;				\
+		parms.req_size = sizeof(req);			\
+		parms.req_data = (uint32_t *)&(req);		\
+		parms.resp_size = 0;				\
+		parms.resp_data = NULL;				\
+	} while (0)
+
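+/*
+ * MSG_PREP usage sketch (illustrative; the type/subtype identifiers
+ * below are assumed, and the helper that would contain this code
+ * lives in tf_msg.c):
+ *
+ *	struct tfp_send_msg_parms parms = { 0 };
+ *	tf_session_sram_resc_qcaps_input_t req = { 0 };
+ *	tf_session_sram_resc_qcaps_output_t resp = { 0 };
+ *
+ *	MSG_PREP(parms, TF_KONG_MB, HWRM_TF,
+ *		 HWRM_TFT_SESSION_SRAM_RESC_QCAPS, req, resp);
+ *	rc = tfp_send_msg_tunneled(tfp, &parms);
+ */
+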
+#endif /* _TF_MSG_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_project.h b/drivers/net/bnxt/tf_core/tf_project.h
new file mode 100644
index 0000000..ab5f113
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_project.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_PROJECT_H_
+#define _TF_PROJECT_H_
+
+/* Wh+ support enabled */
+#ifndef TF_SUPPORT_P4
+#define TF_SUPPORT_P4 1
+#endif
+
+/* Shadow DB Support */
+#ifndef TF_SHADOW
+#define TF_SHADOW 0
+#endif
+
+/* Shared memory for session */
+#ifndef TF_SHARED
+#define TF_SHARED 0
+#endif
+
+#endif /* _TF_PROJECT_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
new file mode 100644
index 0000000..160abac
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_RESOURCES_H_
+#define _TF_RESOURCES_H_
+
+/*
+ * Hardware specific MAX values
+ * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
+ */
+
+/** HW Resource types
+ */
+enum tf_resource_type_hw {
+	/* Common HW resources for all chip variants */
+	TF_RESC_TYPE_HW_L2_CTXT_TCAM,
+	TF_RESC_TYPE_HW_PROF_FUNC,
+	TF_RESC_TYPE_HW_PROF_TCAM,
+	TF_RESC_TYPE_HW_EM_PROF_ID,
+	TF_RESC_TYPE_HW_EM_REC,
+	TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+	TF_RESC_TYPE_HW_WC_TCAM,
+	TF_RESC_TYPE_HW_METER_PROF,
+	TF_RESC_TYPE_HW_METER_INST,
+	TF_RESC_TYPE_HW_MIRROR,
+	TF_RESC_TYPE_HW_UPAR,
+	/* Wh+/Brd2 specific HW resources */
+	TF_RESC_TYPE_HW_SP_TCAM,
+	/* Brd2/Brd4 specific HW resources */
+	TF_RESC_TYPE_HW_L2_FUNC,
+	/* Brd3, Brd4 common HW resources */
+	TF_RESC_TYPE_HW_FKB,
+	/* Brd4 specific HW resources */
+	TF_RESC_TYPE_HW_TBL_SCOPE,
+	TF_RESC_TYPE_HW_EPOCH0,
+	TF_RESC_TYPE_HW_EPOCH1,
+	TF_RESC_TYPE_HW_METADATA,
+	TF_RESC_TYPE_HW_CT_STATE,
+	TF_RESC_TYPE_HW_RANGE_PROF,
+	TF_RESC_TYPE_HW_RANGE_ENTRY,
+	TF_RESC_TYPE_HW_LAG_ENTRY,
+	TF_RESC_TYPE_HW_MAX
+};
+#endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
new file mode 100644
index 0000000..5164d6b
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef TF_RM_H_
+#define TF_RM_H_
+
+#include "tf_resources.h"
+#include "tf_core.h"
+
+struct tf;
+struct tf_session;
+
+/**
+ * Resource query single entry
+ */
+struct tf_rm_query_entry {
+	/** Minimum guaranteed number of elements */
+	uint16_t min;
+	/** Maximum non-guaranteed number of elements */
+	uint16_t max;
+};
+
+/**
+ * Resource query array of HW entities
+ */
+struct tf_rm_hw_query {
+	/** array of HW resource entries */
+	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
+};
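+
+/*
+ * A caller comparing needs against capabilities can walk the array
+ * directly (illustrative sketch; 'needed' is a hypothetical array of
+ * requested counts indexed by enum tf_resource_type_hw):
+ *
+ *	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++)
+ *		if (needed[i] > query.hw_query[i].max)
+ *			rc = -ENOMEM;
+ */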
+
+#endif /* TF_RM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
new file mode 100644
index 0000000..32e53c0
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -0,0 +1,85 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_SESSION_H_
+#define _TF_SESSION_H_
+
+#include <stdint.h>
+#include <stdlib.h>
+
+#include "tf_core.h"
+#include "tf_rm.h"
+
+/** Session defines
+ */
+#define TF_SESSIONS_MAX	          1          /**< max # sessions */
+#define TF_SESSION_ID_INVALID     0xFFFFFFFF /**< Invalid Session ID define */
+
+/** Session
+ *
+ * Shared memory containing private TruFlow session information.
+ * Through this structure the session can keep track of resource
+ * allocations and (if so configured) any shadow copy of flow
+ * information.
+ *
+ * Memory is assigned to the TruFlow instance by way of
+ * tf_open_session(). Memory is allocated and owned by the ULP.
+ *
+ * Access control to this shared memory is handled by the spin_lock in
+ * tf_session_info.
+ */
+struct tf_session {
+	/** TrueFlow Version. Used to control the structure layout
+	 * when sharing sessions. No guarantee that a secondary
+	 * process would come from the same version of an executable.
+	 */
+	struct tf_session_version ver;
+
+	/** Device type, provided by tf_open_session().
+	 */
+	enum tf_device_type device_type;
+
+	/** Session ID, allocated by FW on tf_open_session().
+	 */
+	union tf_session_id session_id;
+
+	/**
+	 * String containing name of control channel interface to be
+	 * used for this session to communicate with firmware.
+	 *
+	 * ctrl_chan_name will be used as part of a name for any
+	 * shared memory allocation.
+	 */
+	char ctrl_chan_name[TF_SESSION_NAME_MAX];
+
+	/**
+	 * Boolean controlling the use and availability of shadow
+	 * copy. Shadow copy will allow the TruFlow Core to keep track
+	 * of resource content on the firmware side without having to
+	 * query firmware. Additional private session core_data will
+	 * be allocated if this boolean is set to 'true', default
+	 * 'false'.
+	 *
+	 * Size of memory depends on the NVM Resource settings for the
+	 * control channel.
+	 */
+	bool shadow_copy;
+
+	/**
+	 * Session Reference Count. To keep track of functions per
+	 * session the ref_count is incremented. There is also a
+	 * parallel TruFlow Firmware ref_count in case the TruFlow
+	 * Core goes away without informing the Firmware.
+	 */
+	uint8_t ref_count;
+
+	/** CRC32 seed table */
+#define TF_LKUP_SEED_MEM_SIZE 512
+	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
+	/** Lookup3 init values */
+	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
+
+};
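+
+/*
+ * Core code reaches this structure through the tf_session_info
+ * wrapper, e.g. (as done in tf_msg.c):
+ *
+ *	struct tf_session *tfs =
+ *		(struct tf_session *)(tfp->session->core_data);
+ */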
+#endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tfp.c b/drivers/net/bnxt/tf_core/tfp.c
new file mode 100644
index 0000000..a769f2f
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tfp.c
@@ -0,0 +1,163 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_memcpy.h>
+#include <rte_byteorder.h>
+#include <rte_config.h>
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_lcore.h>
+#include <rte_log.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_spinlock.h>
+
+#include "tf_core.h"
+#include "tfp.h"
+#include "bnxt.h"
+#include "bnxt_hwrm.h"
+#include "tf_msg_common.h"
+
+/**
+ * Sends TruFlow msg to the TruFlow Firmware using
+ * a message specific HWRM message type.
+ *
+ * Returns success or failure code.
+ */
+int
+tfp_send_msg_direct(struct tf *tfp,
+		    struct tfp_send_msg_parms *parms)
+{
+	int      rc = 0;
+	uint8_t  use_kong_mb = 1;
+
+	if (parms == NULL)
+		return -EINVAL;
+
+	if (parms->mailbox == TF_CHIMP_MB)
+		use_kong_mb = 0;
+
+	rc = bnxt_hwrm_tf_message_direct(container_of(tfp,
+					       struct bnxt,
+					       tfp),
+					 use_kong_mb,
+					 parms->tf_type,
+					 parms->req_data,
+					 parms->req_size,
+					 parms->resp_data,
+					 parms->resp_size);
+
+	return rc;
+}
+
+/**
+ * Sends preformatted TruFlow msg to the TruFlow Firmware using
+ * the Truflow tunnel HWRM message type.
+ *
+ * Returns success or failure code.
+ */
+int
+tfp_send_msg_tunneled(struct tf *tfp,
+		      struct tfp_send_msg_parms *parms)
+{
+	int      rc = 0;
+	uint8_t  use_kong_mb = 1;
+
+	if (parms == NULL)
+		return -EINVAL;
+
+	if (parms->mailbox == TF_CHIMP_MB)
+		use_kong_mb = 0;
+
+	rc = bnxt_hwrm_tf_message_tunneled(container_of(tfp,
+						  struct bnxt,
+						  tfp),
+					   use_kong_mb,
+					   parms->tf_type,
+					   parms->tf_subtype,
+					   &parms->tf_resp_code,
+					   parms->req_data,
+					   parms->req_size,
+					   parms->resp_data,
+					   parms->resp_size);
+
+	return rc;
+}
+
+/**
+ * Allocates zero'ed memory from the heap.
+ *
+ * Returns success or failure code.
+ */
+int
+tfp_calloc(struct tfp_calloc_parms *parms)
+{
+	if (parms == NULL)
+		return -EINVAL;
+
+	parms->mem_va = rte_zmalloc("tf",
+				    (parms->nitems * parms->size),
+				    parms->alignment);
+	if (parms->mem_va == NULL) {
+		PMD_DRV_LOG(ERR, "Allocate failed mem_va\n");
+		return -ENOMEM;
+	}
+
+	parms->mem_pa = (void *)((uintptr_t)rte_mem_virt2iova(parms->mem_va));
+	if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
+		PMD_DRV_LOG(ERR, "Allocate failed mem_pa\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+/**
+ * Frees the memory space pointed to by the provided pointer. The
+ * pointer must have been returned from the tfp_calloc().
+ */
+void
+tfp_free(void *addr)
+{
+	rte_free(addr);
+}
+
+/**
+ * Copies n bytes from src memory to dest memory. The memory areas
+ * must not overlap.
+ */
+void
+tfp_memcpy(void *dest, void *src, size_t n)
+{
+	rte_memcpy(dest, src, n);
+}
+
+/**
+ * Used to initialize portable spin lock
+ */
+void
+tfp_spinlock_init(struct tfp_spinlock_parms *parms)
+{
+	rte_spinlock_init(&parms->slock);
+}
+
+/**
+ * Used to lock portable spin lock
+ */
+void
+tfp_spinlock_lock(struct tfp_spinlock_parms *parms)
+{
+	rte_spinlock_lock(&parms->slock);
+}
+
+/**
+ * Used to unlock portable spin lock
+ */
+void
+tfp_spinlock_unlock(struct tfp_spinlock_parms *parms)
+{
+	rte_spinlock_unlock(&parms->slock);
+}
diff --git a/drivers/net/bnxt/tf_core/tfp.h b/drivers/net/bnxt/tf_core/tfp.h
new file mode 100644
index 0000000..8d5e94e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tfp.h
@@ -0,0 +1,188 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* This header file defines the Portability structures and APIs for
+ * TruFlow.
+ */
+
+#ifndef _TFP_H_
+#define _TFP_H_
+
+#include <rte_spinlock.h>
+
+/** Spinlock
+ */
+struct tfp_spinlock_parms {
+	rte_spinlock_t slock;
+};
+
+/**
+ * @file
+ *
+ * TrueFlow Portability API Header File
+ */
+
+/** send message parameter definition
+ */
+struct tfp_send_msg_parms {
+	/**
+	 * [in] mailbox, specifying the Mailbox to send the command on.
+	 */
+	uint32_t  mailbox;
+	/**
+	 * [in] tf_type, specifies the tlv_type.
+	 */
+	uint16_t  tf_type;
+	/**
+	 * [in] tlv_subtype, specifies the tlv_subtype.
+	 */
+	uint16_t  tf_subtype;
+	/**
+	 * [out] tf_resp_code, response code from the internal tlv
+	 *       message. Only supported on tunneled messages.
+	 */
+	uint32_t tf_resp_code;
+	/**
+	 * [in] size, number specifying the request size of the data in bytes
+	 */
+	uint32_t req_size;
+	/**
+	 * [in] data, pointer to the data to be sent within the HWRM command
+	 */
+	uint32_t *req_data;
+	/**
+	 * [in] size, number specifying the response buffer size in bytes
+	 */
+	uint32_t resp_size;
+	/**
+	 * [out] data, pointer to the buffer that receives the HWRM response
+	 */
+	uint32_t *resp_data;
+};
+
+/** calloc parameter definition
+ */
+struct tfp_calloc_parms {
+	/**
+	 * [in] nitems, number specifying number of items to allocate.
+	 */
+	size_t nitems;
+	/**
+	 * [in] size, number specifying the size of each memory item
+	 *      requested. Size is in bytes.
+	 */
+	size_t size;
+	/**
+	 * [in] alignment, number indicates byte alignment required. 0
+	 *      - don't care, 16 - 16 byte alignment, 4K - 4K alignment etc
+	 */
+	size_t alignment;
+	/**
+	 * [out] mem_va, pointer to the allocated memory.
+	 */
+	void *mem_va;
+	/**
+	 * [out] mem_pa, physical address of the allocated memory.
+	 */
+	void *mem_pa;
+};
+
+/**
+ * @page Portability
+ *
+ * @ref tfp_send_msg_direct
+ * @ref tfp_send_msg_tunneled
+ *
+ * @ref tfp_calloc
+ * @ref tfp_free
+ * @ref tfp_memcpy
+ *
+ * @ref tfp_spinlock_init
+ * @ref tfp_spinlock_lock
+ * @ref tfp_spinlock_unlock
+ *
+ * @ref tfp_cpu_to_le_16
+ * @ref tfp_le_to_cpu_16
+ * @ref tfp_cpu_to_le_32
+ * @ref tfp_le_to_cpu_32
+ * @ref tfp_cpu_to_le_64
+ * @ref tfp_le_to_cpu_64
+ * @ref tfp_cpu_to_be_16
+ * @ref tfp_be_to_cpu_16
+ * @ref tfp_cpu_to_be_32
+ * @ref tfp_be_to_cpu_32
+ * @ref tfp_cpu_to_be_64
+ * @ref tfp_be_to_cpu_64
+ */
+
+#define tfp_cpu_to_le_16(val) rte_cpu_to_le_16(val)
+#define tfp_le_to_cpu_16(val) rte_le_to_cpu_16(val)
+#define tfp_cpu_to_le_32(val) rte_cpu_to_le_32(val)
+#define tfp_le_to_cpu_32(val) rte_le_to_cpu_32(val)
+#define tfp_cpu_to_le_64(val) rte_cpu_to_le_64(val)
+#define tfp_le_to_cpu_64(val) rte_le_to_cpu_64(val)
+#define tfp_cpu_to_be_16(val) rte_cpu_to_be_16(val)
+#define tfp_be_to_cpu_16(val) rte_be_to_cpu_16(val)
+#define tfp_cpu_to_be_32(val) rte_cpu_to_be_32(val)
+#define tfp_be_to_cpu_32(val) rte_be_to_cpu_32(val)
+#define tfp_cpu_to_be_64(val) rte_cpu_to_be_64(val)
+#define tfp_be_to_cpu_64(val) rte_be_to_cpu_64(val)
+#define tfp_bswap_16(val) rte_bswap16(val)
+#define tfp_bswap_32(val) rte_bswap32(val)
+#define tfp_bswap_64(val) rte_bswap64(val)
+
+/**
+ * Provides communication capability from the TrueFlow API layer to
+ * the TrueFlow firmware. The portability layer internally provides
+ * the transport to the firmware.
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] parms, parameter structure
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int tfp_send_msg_direct(struct tf *tfp,
+			struct tfp_send_msg_parms *parms);
+
+/**
+ * Provides communication capability from the TrueFlow API layer to
+ * the TrueFlow firmware. The portability layer internally provides
+ * the transport to the firmware.
+ *
+ * [in] tfp, pointer to TF handle
+ * [in] parms, parameter structure
+ *
+ * Returns:
+ *   0              - Success
+ *   -1             - Global error like not supported
+ *   -EINVAL        - Parameter Error
+ */
+int tfp_send_msg_tunneled(struct tf                 *tfp,
+			  struct tfp_send_msg_parms *parms);
+
+/**
+ * Allocates zero'ed memory from the heap.
+ *
+ * NOTE: Also performs virt2phy address conversion by default, thus it
+ * can be expensive to invoke.
+ *
+ * [in] parms, parameter structure
+ *
+ * Returns:
+ *   0              - Success
+ *   -ENOMEM        - No memory available
+ *   -EINVAL        - Parameter error
+ */
+int tfp_calloc(struct tfp_calloc_parms *parms);
+
+void tfp_free(void *addr);
+void tfp_memcpy(void *dest, void *src, size_t n);
+void tfp_spinlock_init(struct tfp_spinlock_parms *slock);
+void tfp_spinlock_lock(struct tfp_spinlock_parms *slock);
+void tfp_spinlock_unlock(struct tfp_spinlock_parms *slock);
+#endif /* _TFP_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 05/34] net/bnxt: add initial tf core session close support
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (3 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 04/34] net/bnxt: add initial tf core session open Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-16 17:39         ` Ferruh Yigit
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 06/34] net/bnxt: add tf core session sram functions Venkat Duvvuru
                         ` (31 subsequent siblings)
  36 siblings, 1 reply; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TruFlow session and resource support functions
- Add TruFlow session close API and related message support functions
  for both session and HW resources

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile               |   1 +
 drivers/net/bnxt/tf_core/bitalloc.c     | 364 +++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/bitalloc.h     | 119 ++++++++++
 drivers/net/bnxt/tf_core/tf_core.c      |  86 +++++++
 drivers/net/bnxt/tf_core/tf_msg.c       | 401 ++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h       |  42 ++++
 drivers/net/bnxt/tf_core/tf_resources.h |  24 +-
 drivers/net/bnxt/tf_core/tf_rm.h        | 113 +++++++++
 drivers/net/bnxt/tf_core/tf_session.h   |   1 +
 9 files changed, 1146 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.c
 create mode 100644 drivers/net/bnxt/tf_core/bitalloc.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 8a68059..8474673 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -48,6 +48,7 @@ CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core
 endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
diff --git a/drivers/net/bnxt/tf_core/bitalloc.c b/drivers/net/bnxt/tf_core/bitalloc.c
new file mode 100644
index 0000000..fb4df9a
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/bitalloc.c
@@ -0,0 +1,364 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bitalloc.h"
+
+#define BITALLOC_MAX_LEVELS 6
+
+/* Finds the first bit set plus 1, equivalent to gcc __builtin_ffs */
+static int
+ba_ffs(bitalloc_word_t v)
+{
+	int c; /* c will be the number of zero bits on the right plus 1 */
+
+	v &= -v;
+	c = v ? 32 : 0;
+
+	if (v & 0x0000FFFF)
+		c -= 16;
+	if (v & 0x00FF00FF)
+		c -= 8;
+	if (v & 0x0F0F0F0F)
+		c -= 4;
+	if (v & 0x33333333)
+		c -= 2;
+	if (v & 0x55555555)
+		c -= 1;
+
+	return c;
+}
+
+int
+ba_init(struct bitalloc *pool, int size)
+{
+	bitalloc_word_t *mem = (bitalloc_word_t *)pool;
+	int       i;
+
+	/* Initialize */
+	pool->size = 0;
+
+	if (size < 1 || size > BITALLOC_MAX_SIZE)
+		return -1;
+
+	/* Zero structure */
+	for (i = 0;
+	     i < (int)(BITALLOC_SIZEOF(size) / sizeof(bitalloc_word_t));
+	     i++)
+		mem[i] = 0;
+
+	/* Initialize */
+	pool->size = size;
+
+	/* Embed number of words of next level, after each level */
+	int words[BITALLOC_MAX_LEVELS];
+	int lev = 0;
+	int offset = 0;
+
+	words[0] = (size + 31) / 32;
+	while (words[lev] > 1) {
+		lev++;
+		words[lev] = (words[lev - 1] + 31) / 32;
+	}
+
+	while (lev) {
+		offset += words[lev];
+		pool->storage[offset++] = words[--lev];
+	}
+
+	/* Free the entire pool */
+	for (i = 0; i < size; i++)
+		ba_free(pool, i);
+
+	return 0;
+}
+
+static int
+ba_alloc_helper(struct bitalloc *pool,
+		int              offset,
+		int              words,
+		unsigned int     size,
+		int              index,
+		int             *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc = ba_ffs(storage[index]);
+	int       r;
+
+	if (loc == 0)
+		return -1;
+
+	loc--;
+
+	if (pool->size > size) {
+		r = ba_alloc_helper(pool,
+				    offset + words + 1,
+				    storage[words],
+				    size * 32,
+				    index * 32 + loc,
+				    clear);
+	} else {
+		r = index * 32 + loc;
+		*clear = 1;
+		pool->free_count--;
+	}
+
+	if (*clear) {
+		storage[index] &= ~(1 << loc);
+		*clear = (storage[index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc(struct bitalloc *pool)
+{
+	int clear = 0;
+
+	return ba_alloc_helper(pool, 0, 1, 32, 0, &clear);
+}
+
+static int
+ba_alloc_index_helper(struct bitalloc *pool,
+		      int              offset,
+		      int              words,
+		      unsigned int     size,
+		      int             *index,
+		      int             *clear)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc;
+	int       r;
+
+	if (pool->size > size)
+		r = ba_alloc_index_helper(pool,
+					  offset + words + 1,
+					  storage[words],
+					  size * 32,
+					  index,
+					  clear);
+	else
+		r = 1; /* Check if already allocated */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (r == 1) {
+		r = (storage[*index] & (1 << loc)) ? 0 : -1;
+		if (r == 0) {
+			*clear = 1;
+			pool->free_count--;
+		}
+	}
+
+	if (*clear) {
+		storage[*index] &= ~(1 << loc);
+		*clear = (storage[*index] == 0);
+	}
+
+	return r;
+}
+
+int
+ba_alloc_index(struct bitalloc *pool, int index)
+{
+	int clear = 0;
+	int index_copy = index;
+
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	if (ba_alloc_index_helper(pool, 0, 1, 32, &index_copy, &clear) >= 0)
+		return index;
+	else
+		return -1;
+}
+
+static int
+ba_inuse_helper(struct bitalloc *pool,
+		int              offset,
+		int              words,
+		unsigned int     size,
+		int             *index)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc;
+	int       r;
+
+	if (pool->size > size)
+		r = ba_inuse_helper(pool,
+				    offset + words + 1,
+				    storage[words],
+				    size * 32,
+				    index);
+	else
+		r = 1; /* Check if in use */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (r == 1)
+		r = (storage[*index] & (1 << loc)) ? -1 : 0;
+
+	return r;
+}
+
+int
+ba_inuse(struct bitalloc *pool, int index)
+{
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	return ba_inuse_helper(pool, 0, 1, 32, &index) == 0;
+}
+
+static int
+ba_free_helper(struct bitalloc *pool,
+	       int              offset,
+	       int              words,
+	       unsigned int     size,
+	       int             *index)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc;
+	int       r;
+
+	if (pool->size > size)
+		r = ba_free_helper(pool,
+				   offset + words + 1,
+				   storage[words],
+				   size * 32,
+				   index);
+	else
+		r = 1; /* Check if already free */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (r == 1) {
+		r = (storage[*index] & (1 << loc)) ? -1 : 0;
+		if (r == 0)
+			pool->free_count++;
+	}
+
+	if (r == 0)
+		storage[*index] |= (1 << loc);
+
+	return r;
+}
+
+int
+ba_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	return ba_free_helper(pool, 0, 1, 32, &index);
+}
+
+int
+ba_inuse_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 || index >= (int)pool->size)
+		return -1;
+
+	return ba_free_helper(pool, 0, 1, 32, &index) + 1;
+}
+
+int
+ba_free_count(struct bitalloc *pool)
+{
+	return (int)pool->free_count;
+}
+
+int
+ba_inuse_count(struct bitalloc *pool)
+{
+	return (int)(pool->size) - (int)(pool->free_count);
+}
+
+static int
+ba_find_next_helper(struct bitalloc *pool,
+		    int              offset,
+		    int              words,
+		    unsigned int     size,
+		    int             *index,
+		    int              free)
+{
+	bitalloc_word_t *storage = &pool->storage[offset];
+	int       loc, r, bottom = 0;
+
+	if (pool->size > size)
+		r = ba_find_next_helper(pool,
+					offset + words + 1,
+					storage[words],
+					size * 32,
+					index,
+					free);
+	else
+		bottom = 1; /* Bottom of tree */
+
+	loc = (*index % 32);
+	*index = *index / 32;
+
+	if (bottom) {
+		int bit_index = *index * 32;
+
+		loc = ba_ffs(~storage[*index] & ((bitalloc_word_t)-1 << loc));
+		if (loc > 0) {
+			loc--;
+			r = (bit_index + loc);
+			if (r >= (int)pool->size)
+				r = -1;
+		} else {
+			/* Loop over array at bottom of tree */
+			r = -1;
+			bit_index += 32;
+			*index = *index + 1;
+			while ((int)pool->size > bit_index) {
+				loc = ba_ffs(~storage[*index]);
+
+				if (loc > 0) {
+					loc--;
+					r = (bit_index + loc);
+					if (r >= (int)pool->size)
+						r = -1;
+					break;
+				}
+				bit_index += 32;
+				*index = *index + 1;
+			}
+		}
+	}
+
+	if (r >= 0 && (free)) {
+		if (bottom)
+			pool->free_count++;
+		storage[*index] |= (1 << loc);
+	}
+
+	return r;
+}
+
+int
+ba_find_next_inuse(struct bitalloc *pool, int index)
+{
+	if (index < 0 ||
+	    index >= (int)pool->size ||
+	    pool->free_count == pool->size)
+		return -1;
+
+	return ba_find_next_helper(pool, 0, 1, 32, &index, 0);
+}
+
+int
+ba_find_next_inuse_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 ||
+	    index >= (int)pool->size ||
+	    pool->free_count == pool->size)
+		return -1;
+
+	return ba_find_next_helper(pool, 0, 1, 32, &index, 1);
+}
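
As a worked example of the ba_ffs() bit trick used above: v &= -v
isolates the lowest set bit, and the mask cascade then binary-searches
its position. For v = 12 (binary 1100):

	v &= -v;	/* v = 4 (binary 100), the lowest set bit */
	/* c starts at 32; the 0x0000FFFF, 0x00FF00FF and 0x0F0F0F0F
	 * masks subtract 16, 8 and 4; 4 & 0x33333333 is 0, while
	 * 4 & 0x55555555 is non-zero, subtracting a final 1.
	 * Result: c = 3, i.e. ffs(12) == 3 (bit 2, one-based).
	 */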
diff --git a/drivers/net/bnxt/tf_core/bitalloc.h b/drivers/net/bnxt/tf_core/bitalloc.h
new file mode 100644
index 0000000..563c853
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/bitalloc.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BITALLOC_H_
+#define _BITALLOC_H_
+
+#include <stdint.h>
+
+/* Bitalloc works on uint32_t as its word size */
+typedef uint32_t bitalloc_word_t;
+
+struct bitalloc {
+	bitalloc_word_t size;
+	bitalloc_word_t free_count;
+	bitalloc_word_t storage[1];
+};
+
+#define BA_L0(s) (((s) + 31) / 32)
+#define BA_L1(s) ((BA_L0(s) + 31) / 32)
+#define BA_L2(s) ((BA_L1(s) + 31) / 32)
+#define BA_L3(s) ((BA_L2(s) + 31) / 32)
+#define BA_L4(s) ((BA_L3(s) + 31) / 32)
+
+#define BITALLOC_SIZEOF(size)                                    \
+	(sizeof(struct bitalloc) *				 \
+	 (((sizeof(struct bitalloc) +				 \
+	    sizeof(struct bitalloc) - 1 +			 \
+	    (sizeof(bitalloc_word_t) *				 \
+	     ((BA_L0(size) - 1) +				 \
+	      ((BA_L0(size) == 1) ? 0 : (BA_L1(size) + 1)) +	 \
+	      ((BA_L1(size) == 1) ? 0 : (BA_L2(size) + 1)) +	 \
+	      ((BA_L2(size) == 1) ? 0 : (BA_L3(size) + 1)) +	 \
+	      ((BA_L3(size) == 1) ? 0 : (BA_L4(size) + 1)))))) / \
+	  sizeof(struct bitalloc)))
+
+#define BITALLOC_MAX_SIZE (32 * 32 * 32 * 32 * 32 * 32)
+
+/* The instantiation of a bitalloc looks a bit odd. Since a
+ * bit allocator has variable storage, we need a way to get a
+ * pointer to a bitalloc structure that points to the correct
+ * amount of storage. We do this by creating an array of
+ * bitalloc where the first element in the array is the
+ * actual bitalloc base structure, and the remaining elements
+ * in the array provide the storage for it. This approach allows
+ * instances to be individual variables or members of larger
+ * structures.
+ */
+#define BITALLOC_INST(name, size)                      \
+	struct bitalloc name[(BITALLOC_SIZEOF(size) /  \
+			      sizeof(struct bitalloc))]
+
+/* Symbolic return codes */
+#define BA_SUCCESS           0
+#define BA_FAIL             -1
+#define BA_ENTRY_FREE        0
+#define BA_ENTRY_IN_USE      1
+#define BA_NO_ENTRY_FOUND   -1
+
+/**
+ * Initializes the bit allocator
+ *
+ * Returns 0 on success, -1 on failure.  Size is arbitrary up to
+ * BITALLOC_MAX_SIZE
+ */
+int ba_init(struct bitalloc *pool, int size);
+
+/**
+ * Returns -1 on failure, or index of allocated entry
+ */
+int ba_alloc(struct bitalloc *pool);
+int ba_alloc_index(struct bitalloc *pool, int index);
+
+/**
+ * Query a particular index in a pool to check if it is in use.
+ *
+ * Returns -1 on invalid index, 1 if the index is allocated, 0 if it
+ * is free
+ */
+int ba_inuse(struct bitalloc *pool, int index);
+
+/**
+ * Variant of ba_inuse that frees the index if it is allocated, same
+ * return codes as ba_inuse
+ */
+int ba_inuse_free(struct bitalloc *pool, int index);
+
+/**
+ * Find next index that is in use, start checking at index 'idx'
+ *
+ * Returns next index that is in use on success, or
+ * -1 if no in use index is found
+ */
+int ba_find_next_inuse(struct bitalloc *pool, int idx);
+
+/**
+ * Variant of ba_find_next_inuse that also frees the next in use index,
+ * same return codes as ba_find_next_inuse
+ */
+int ba_find_next_inuse_free(struct bitalloc *pool, int idx);
+
+/**
+ * Frees the given index. Freeing an already-free index has no
+ * negative side effects, but returns -1. Returns 0 on success, -1
+ * on failure.
+ */
+int ba_free(struct bitalloc *pool, int index);
+
+/**
+ * Returns the pool's free count
+ */
+int ba_free_count(struct bitalloc *pool);
+
+/**
+ * Returns the pool's in use count
+ */
+int ba_inuse_count(struct bitalloc *pool);
+
+#endif /* _BITALLOC_H_ */
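
A minimal usage sketch of the pool API declared above; note that
BITALLOC_MAX_SIZE is 32^6 = 1,073,741,824, matching the six-level
32-ary bitmap tree built by ba_init():

	static int bitalloc_example(void)
	{
		BITALLOC_INST(my_pool, 1024);	/* pool plus bitmap storage */
		int id;

		if (ba_init(my_pool, 1024) == BA_FAIL)
			return -1;

		id = ba_alloc(my_pool);		/* lowest free index */
		if (id == BA_NO_ENTRY_FOUND)
			return -1;

		/* ... use id ... */

		return ba_free(my_pool, id);	/* BA_SUCCESS on success */
	}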
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 6bafae5..3c5d55d 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -7,10 +7,18 @@
 
 #include "tf_core.h"
 #include "tf_session.h"
+#include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
+#include "bitalloc.h"
 #include "bnxt.h"
 
+static inline uint32_t SWAP_WORDS32(uint32_t val32)
+{
+	return (((val32 & 0x0000ffff) << 16) |
+		((val32 & 0xffff0000) >> 16));
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -141,5 +149,83 @@ tf_open_session(struct tf                    *tfp,
 	return rc;
 
  cleanup_close:
+	tf_close_session(tfp);
 	return -EINVAL;
 }
+
+int
+tf_attach_session(struct tf *tfp __rte_unused,
+		  struct tf_attach_session_parms *parms __rte_unused)
+{
+#if (TF_SHARED == 1)
+	int rc;
+
+	if (tfp == NULL)
+		return -EINVAL;
+
+	/* - Open the shared memory for the attach_chan_name
+	 * - Point to the shared session for this Device instance
+	 * - Check that session is valid
+	 * - Attach to the firmware so it can record there is more
+	 *   than one client of the session.
+	 */
+
+	if (tfp->session) {
+		if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) {
+			rc = tf_msg_session_attach(tfp,
+						   parms->ctrl_chan_name,
+						   parms->session_id);
+		}
+	}
+#endif /* TF_SHARED */
+	return -1;
+}
+
+int
+tf_close_session(struct tf *tfp)
+{
+	int rc;
+	int rc_close = 0;
+	struct tf_session *tfs;
+	union tf_session_id session_id;
+
+	if (tfp == NULL || tfp->session == NULL)
+		return -EINVAL;
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
+		rc = tf_msg_session_close(tfp);
+		if (rc) {
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "Message send failed, rc:%d\n",
+				    rc);
+		}
+
+		/* Update the ref_count */
+		tfs->ref_count--;
+	}
+
+	session_id = tfs->session_id;
+
+	/* Final cleanup as we're last user of the session */
+	if (tfs->ref_count == 0) {
+		tfp_free(tfp->session->core_data);
+		tfp_free(tfp->session);
+		tfp->session = NULL;
+	}
+
+	PMD_DRV_LOG(INFO,
+		    "Session closed, session_id:%d\n",
+		    session_id.id);
+
+	PMD_DRV_LOG(INFO,
+		    "domain:%d, bus:%d, device:%d, fw_session_id:%d\n",
+		    session_id.internal.domain,
+		    session_id.internal.bus,
+		    session_id.internal.device,
+		    session_id.internal.fw_session_id);
+
+	return rc_close;
+}
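
A hedged sketch of the open/close pairing this hunk completes; the
tf_open_session parameter fields come from an earlier patch in the
series and are not all shown here:

	struct tf tfp_inst = { 0 };
	struct tf_open_session_parms oparms = { 0 };

	/* ... fill oparms (ctrl_chan_name etc., per the open patch) ... */
	if (tf_open_session(&tfp_inst, &oparms) == 0) {
		/* ... allocate and program TruFlow resources ... */

		/* Frees the session data once ref_count reaches zero */
		tf_close_session(&tfp_inst);
	}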
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 2b68681..e05aea7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -18,6 +18,82 @@
 #include "hwrm_tf.h"
 
 /**
+ * Endian converts min and max values from the HW response to the query
+ */
+#define TF_HW_RESP_TO_QUERY(query, index, response, element) do {            \
+	(query)->hw_query[index].min =                                       \
+		tfp_le_to_cpu_16(response. element ## _min);                 \
+	(query)->hw_query[index].max =                                       \
+		tfp_le_to_cpu_16(response. element ## _max);                 \
+} while (0)
+
+/**
+ * Endian converts the number of entries from the alloc to the request
+ */
+#define TF_HW_ALLOC_TO_REQ(alloc, index, request, element)                   \
+	(request. num_ ## element = tfp_cpu_to_le_16((alloc)->hw_num[index]))
+
+/**
+ * Endian converts the start and stride value from the free to the request
+ */
+#define TF_HW_FREE_TO_REQ(hw_entry, index, request, element) do {            \
+	request.element ## _start =                                          \
+		tfp_cpu_to_le_16(hw_entry[index].start);                     \
+	request.element ## _stride =                                         \
+		tfp_cpu_to_le_16(hw_entry[index].stride);                    \
+} while (0)
+
+/**
+ * Endian converts the start and stride from the HW response to the
+ * alloc
+ */
+#define TF_HW_RESP_TO_ALLOC(hw_entry, index, response, element) do {         \
+	hw_entry[index].start =                                              \
+		tfp_le_to_cpu_16(response.element ## _start);                \
+	hw_entry[index].stride =                                             \
+		tfp_le_to_cpu_16(response.element ## _stride);               \
+} while (0)
+
+/**
+ * Endian converts min and max values from the SRAM response to the
+ * query
+ */
+#define TF_SRAM_RESP_TO_QUERY(query, index, response, element) do {          \
+	(query)->sram_query[index].min =                                     \
+		tfp_le_to_cpu_16(response.element ## _min);                  \
+	(query)->sram_query[index].max =                                     \
+		tfp_le_to_cpu_16(response.element ## _max);                  \
+} while (0)
+
+/**
+ * Endian converts the number of entries from the action (alloc) to
+ * the request
+ */
+#define TF_SRAM_ALLOC_TO_REQ(action, index, request, element)                \
+	(request. num_ ## element = tfp_cpu_to_le_16((action)->sram_num[index]))
+
+/**
+ * Endian converts the start and stride value from the free to the request
+ */
+#define TF_SRAM_FREE_TO_REQ(sram_entry, index, request, element) do {        \
+	request.element ## _start =                                          \
+		tfp_cpu_to_le_16(sram_entry[index].start);                   \
+	request.element ## _stride =                                         \
+		tfp_cpu_to_le_16(sram_entry[index].stride);                  \
+} while (0)
+
+/**
+ * Endian converts the start and stride from the HW response to the
+ * alloc
+ */
+#define TF_SRAM_RESP_TO_ALLOC(sram_entry, index, response, element) do {     \
+	sram_entry[index].start =                                            \
+		tfp_le_to_cpu_16(response.element ## _start);                \
+	sram_entry[index].stride =                                           \
+		tfp_le_to_cpu_16(response.element ## _stride);               \
+} while (0)
+
+/**
  * Sends session open request to TF Firmware
  */
 int
@@ -51,6 +127,45 @@ tf_msg_session_open(struct tf *tfp,
 }
 
 /**
+ * Sends session attach request to TF Firmware
+ */
+int
+tf_msg_session_attach(struct tf *tfp __rte_unused,
+		      char *ctrl_chan_name __rte_unused,
+		      uint8_t tf_fw_session_id __rte_unused)
+{
+	return -1;
+}
+
+/**
+ * Sends session close request to TF Firmware
+ */
+int
+tf_msg_session_close(struct tf *tfp)
+{
+	int rc;
+	struct hwrm_tf_session_close_input req = { 0 };
+	struct hwrm_tf_session_close_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+
+	parms.tf_type = HWRM_TF_SESSION_CLOSE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
  * Sends session query config request to TF Firmware
  */
 int
@@ -77,3 +192,289 @@ tf_msg_session_qcfg(struct tf *tfp)
 				 &parms);
 	return rc;
 }
+
+/**
+ * Sends session HW resource query capability request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_qcaps(struct tf *tfp,
+			     enum tf_dir dir,
+			     struct tf_rm_hw_query *query)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_qcaps_input req = { 0 };
+	struct tf_session_hw_resc_qcaps_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(query, 0, sizeof(*query));
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_HW_RESC_QCAPS,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
+			    l2_ctx_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_FUNC, resp,
+			    prof_func);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_TCAM, resp,
+			    prof_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
+			    em_prof_id);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_REC, resp,
+			    em_record_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
+			    wc_tcam_prof_id);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM, resp,
+			    wc_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_PROF, resp,
+			    meter_profiles);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_INST,
+			    resp, meter_inst);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_MIRROR, resp,
+			    mirrors);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_UPAR, resp,
+			    upar);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_SP_TCAM, resp,
+			    sp_tcam_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_FUNC, resp,
+			    l2_func);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_FKB, resp,
+			    flex_key_templ);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
+			    tbl_scope);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH0, resp,
+			    epoch0_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH1, resp,
+			    epoch1_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METADATA, resp,
+			    metadata);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_CT_STATE, resp,
+			    ct_state);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_PROF, resp,
+			    range_prof);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
+			    range_entries);
+	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
+			    lag_tbl_entries);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_alloc(struct tf *tfp,
+			     enum tf_dir dir,
+			     struct tf_rm_hw_alloc *hw_alloc,
+			     struct tf_rm_entry *hw_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_alloc_input req = { 0 };
+	struct tf_session_hw_resc_alloc_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(hw_entry, 0, sizeof(*hw_entry) * TF_RESC_TYPE_HW_MAX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
+			   l2_ctx_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_FUNC, req,
+			   prof_func_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_TCAM, req,
+			   prof_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_PROF_ID, req,
+			   em_prof_id);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_REC, req,
+			   em_record_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
+			   wc_tcam_prof_id);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM, req,
+			   wc_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_PROF, req,
+			   meter_profiles);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_INST, req,
+			   meter_inst);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_MIRROR, req,
+			   mirrors);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_UPAR, req,
+			   upar);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_SP_TCAM, req,
+			   sp_tcam_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_FUNC, req,
+			   l2_func);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_FKB, req,
+			   flex_key_templ);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_TBL_SCOPE, req,
+			   tbl_scope);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH0, req,
+			   epoch0_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH1, req,
+			   epoch1_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METADATA, req,
+			   metadata);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_CT_STATE, req,
+			   ct_state);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_PROF, req,
+			   range_prof);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
+			   range_entries);
+	TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_LAG_ENTRY, req,
+			   lag_tbl_entries);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_HW_RESC_ALLOC,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp,
+			    l2_ctx_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, resp,
+			    prof_func);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, resp,
+			    prof_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, resp,
+			    em_prof_id);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_REC, resp,
+			    em_record_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp,
+			    wc_tcam_prof_id);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, resp,
+			    wc_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_PROF, resp,
+			    meter_profiles);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_INST, resp,
+			    meter_inst);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_MIRROR, resp,
+			    mirrors);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_UPAR, resp,
+			    upar);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, resp,
+			    sp_tcam_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, resp,
+			    l2_func);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_FKB, resp,
+			    flex_key_templ);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, resp,
+			    tbl_scope);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH0, resp,
+			    epoch0_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH1, resp,
+			    epoch1_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METADATA, resp,
+			    metadata);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_CT_STATE, resp,
+			    ct_state);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, resp,
+			    range_prof);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, resp,
+			    range_entries);
+	TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, resp,
+			    lag_tbl_entries);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session HW resource free request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_free(struct tf *tfp,
+			    enum tf_dir dir,
+			    struct tf_rm_entry *hw_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(hw_entry, 0, sizeof(*hw_entry) * TF_RESC_TYPE_HW_MAX);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
+			  l2_ctx_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
+			  prof_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
+			  prof_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
+			  em_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
+			  em_record_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
+			  wc_tcam_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
+			  wc_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
+			  meter_profiles);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
+			  meter_inst);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
+			  mirrors);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
+			  upar);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
+			  sp_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
+			  l2_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
+			  flex_key_templ);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
+			  tbl_scope);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
+			  epoch0_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
+			  epoch1_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
+			  metadata);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
+			  ct_state);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
+			  range_prof);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
+			  range_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
+			  lag_tbl_entries);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SESSION_HW_RESC_FREE,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
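
To make the token pasting above concrete, one of the response
conversions expands as follows (per the TF_HW_RESP_TO_QUERY definition
at the top of this file):

	TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_FUNC, resp,
			    prof_func);
	/* ... is equivalent to ... */
	query->hw_query[TF_RESC_TYPE_HW_PROF_FUNC].min =
		tfp_le_to_cpu_16(resp.prof_func_min);
	query->hw_query[TF_RESC_TYPE_HW_PROF_FUNC].max =
		tfp_le_to_cpu_16(resp.prof_func_max);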
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 20ebf2e..da5ccf3 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -30,6 +30,34 @@ int tf_msg_session_open(struct tf *tfp,
 			uint8_t *fw_session_id);
 
 /**
+ * Sends session attach request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] ctrl_channel_name
+ *   Name of the control channel to attach through
+ *
+ * [in] tf_fw_session_id
+ *   fw_session_id assigned to the session at the time of session open
+ *
+ * Returns:
+ *   0 on Success, -1 on Failure
+ */
+int tf_msg_session_attach(struct tf *tfp,
+			  char *ctrl_channel_name,
+			  uint8_t tf_fw_session_id);
+
+/**
+ * Sends session close request to TF Firmware
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns:
+ *   0 on Success, -1 on Failure
+ */
+int tf_msg_session_close(struct tf *tfp);
+
+/**
  * Sends session query config request to TF Firmware
  */
 int tf_msg_session_qcfg(struct tf *tfp);
@@ -41,4 +69,18 @@ int tf_msg_session_hw_resc_qcaps(struct tf *tfp,
 				 enum tf_dir dir,
 				 struct tf_rm_hw_query *hw_query);
 
+/**
+ * Sends session HW resource allocation request to TF Firmware
+ */
+int tf_msg_session_hw_resc_alloc(struct tf *tfp,
+				 enum tf_dir dir,
+				 struct tf_rm_hw_alloc *hw_alloc,
+				 struct tf_rm_entry *hw_entry);
+
+/**
+ * Sends session HW resource free request to TF Firmware
+ */
+int tf_msg_session_hw_resc_free(struct tf *tfp,
+				enum tf_dir dir,
+				struct tf_rm_entry *hw_entry);
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 160abac..8dbb2f9 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,11 +6,6 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
-/*
- * Hardware specific MAX values
- * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
- */
-
 /** HW Resource types
  */
 enum tf_resource_type_hw {
@@ -43,4 +38,23 @@ enum tf_resource_type_hw {
 	TF_RESC_TYPE_HW_LAG_ENTRY,
 	TF_RESC_TYPE_HW_MAX
 };
+
+/** SRAM Resource types
+ */
+enum tf_resource_type_sram {
+	TF_RESC_TYPE_SRAM_FULL_ACTION,
+	TF_RESC_TYPE_SRAM_MCG,
+	TF_RESC_TYPE_SRAM_ENCAP_8B,
+	TF_RESC_TYPE_SRAM_ENCAP_16B,
+	TF_RESC_TYPE_SRAM_ENCAP_64B,
+	TF_RESC_TYPE_SRAM_SP_SMAC,
+	TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+	TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+	TF_RESC_TYPE_SRAM_COUNTER_64B,
+	TF_RESC_TYPE_SRAM_NAT_SPORT,
+	TF_RESC_TYPE_SRAM_NAT_DPORT,
+	TF_RESC_TYPE_SRAM_NAT_S_IPV4,
+	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
+	TF_RESC_TYPE_SRAM_MAX
+};
 #endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 5164d6b..57ce19b 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -8,10 +8,52 @@
 
 #include "tf_resources.h"
 #include "tf_core.h"
+#include "bitalloc.h"
 
 struct tf;
 struct tf_session;
 
+/* Internal macro to select the appropriate allocation pool based on
+ * the DIRECTION parm; also performs error checking on the DIRECTION
+ * parm. The SESSION_POOL pointer is set appropriately upon successful
+ * return and is used to track the resources that have been allocated
+ * to the session.
+ *
+ * parameters:
+ *   struct tfp        *tfp
+ *   enum tf_dir        direction
+ *   struct bitalloc  **session_pool
+ *   string             base_pool_name - used to form pointers to the
+ *					 appropriate bit allocation
+ *					 pools; both directions of a
+ *					 session pool must share the
+ *					 same base name. For example,
+ *					 if POOL_NAME is feat_pool,
+ *					 the session pool members are
+ *					 feat_pool_RX and feat_pool_TX.
+ *
+ *  int                  rc - return code
+ *			      0 - Success
+ *			     -1 - invalid DIRECTION parm
+ */
+#define TF_RM_GET_POOLS(tfs, direction, session_pool, pool_name, rc) do { \
+		(rc) = 0;						\
+		if ((direction) == TF_DIR_RX) {				\
+			*(session_pool) = (tfs)->pool_name ## _RX;	\
+		} else if ((direction) == TF_DIR_TX) {			\
+			*(session_pool) = (tfs)->pool_name ## _TX;	\
+		} else {						\
+			rc = -1;					\
+		}							\
+	} while (0)
+
+#define TF_RM_GET_POOLS_RX(tfs, session_pool, pool_name)	\
+	(*(session_pool) = (tfs)->pool_name ## _RX)
+
+#define TF_RM_GET_POOLS_TX(tfs, session_pool, pool_name)	\
+	(*(session_pool) = (tfs)->pool_name ## _TX)
+
 /**
  * Resource query single entry
  */
@@ -23,6 +65,16 @@ struct tf_rm_query_entry {
 };
 
 /**
+ * Resource single entry
+ */
+struct tf_rm_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
+
+/**
  * Resource query array of HW entities
  */
 struct tf_rm_hw_query {
@@ -30,4 +82,65 @@ struct tf_rm_hw_query {
 	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
 };
 
+/**
+ * Resource allocation array of HW entities
+ */
+struct tf_rm_hw_alloc {
+	/** array of HW resource entries */
+	uint16_t hw_num[TF_RESC_TYPE_HW_MAX];
+};
+
+/**
+ * Resource query array of SRAM entities
+ */
+struct tf_rm_sram_query {
+	/** array of SRAM resource entries */
+	struct tf_rm_query_entry sram_query[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Resource allocation array of SRAM entities
+ */
+struct tf_rm_sram_alloc {
+	/** array of SRAM resource entries */
+	uint16_t sram_num[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Initializes the Resource Manager and the associated database
+ * entries for HW and SRAM resources. Must be called before any other
+ * Resource Manager functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ */
+void tf_rm_init(struct tf *tfp);
+
+/**
+ * Allocates and validates both HW and SRAM resources per the NVM
+ * configuration. If any allocation fails, all resources for the
+ * session are deallocated.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_allocate_validate(struct tf *tfp);
+
+/**
+ * Closes the Resource Manager and frees all allocated resources per
+ * the associated database.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOTEMPTY) if resources are not cleaned up before close
+ */
+int tf_rm_close(struct tf *tfp);
 #endif /* TF_RM_H_ */
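
A short sketch of how TF_RM_GET_POOLS is meant to be used, given a
struct tf_session *tfs; the feat_pool member name is the hypothetical
example from the macro comment above, and the real session pool
members are introduced later in the series:

	struct bitalloc *pool;
	int rc;

	TF_RM_GET_POOLS(tfs, TF_DIR_RX, &pool, feat_pool, rc);
	if (rc == 0) {
		int id = ba_alloc(pool);	/* allocate from the RX pool */

		/* ... record id in the session's allocation state ... */
	}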
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 32e53c0..651d3ee 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -9,6 +9,7 @@
 #include <stdint.h>
 #include <stdlib.h>
 
+#include "bitalloc.h"
 #include "tf_core.h"
 #include "tf_rm.h"
 
-- 
2.7.4



* [dpdk-dev] [PATCH v4 06/34] net/bnxt: add tf core session sram functions
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (4 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 05/34] net/bnxt: add initial tf core session close support Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 07/34] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
                         ` (30 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TruFlow session resource support functionality
- Add TruFlow session HW flush capability as well as
  SRAM support functions.
- Add resource definitions for session pools.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile               |   1 +
 drivers/net/bnxt/tf_core/rand.c         |  47 ++++
 drivers/net/bnxt/tf_core/rand.h         |  36 +++
 drivers/net/bnxt/tf_core/tf_core.c      |   1 +
 drivers/net/bnxt/tf_core/tf_msg.c       | 344 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h       |  37 +++
 drivers/net/bnxt/tf_core/tf_resources.h | 482 ++++++++++++++++++++++++++++++++
 7 files changed, 948 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_core/rand.c
 create mode 100644 drivers/net/bnxt/tf_core/rand.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 8474673..c39c098 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -50,6 +50,7 @@ endif
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
 #
diff --git a/drivers/net/bnxt/tf_core/rand.c b/drivers/net/bnxt/tf_core/rand.c
new file mode 100644
index 0000000..32028df
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/rand.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Random Number Functions */
+
+#include <stdio.h>
+#include <stdint.h>
+#include "rand.h"
+
+#define TF_RAND_LFSR_INIT_VALUE 0xACE1u
+
+static uint16_t lfsr = TF_RAND_LFSR_INIT_VALUE;
+static uint32_t bit;
+
+/**
+ * Generates a 16 bit pseudo random number
+ *
+ * Returns:
+ *   uint16_t number
+ */
+uint16_t rand16(void)
+{
+	bit = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1;
+	return lfsr = (lfsr >> 1) | (bit << 15);
+}
+
+/**
+ * Generates a 32 bit pseudo random number
+ *
+ * Returns:
+ *   uint32_t number
+ */
+uint32_t rand32(void)
+{
+	return ((uint32_t)rand16() << 16) | rand16();
+}
+
+/**
+ * Resets the seed used by the pseudo random number generator
+ */
+void rand_init(void)
+{
+	lfsr = TF_RAND_LFSR_INIT_VALUE;
+	bit = 0;
+}
diff --git a/drivers/net/bnxt/tf_core/rand.h b/drivers/net/bnxt/tf_core/rand.h
new file mode 100644
index 0000000..31cd76e
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/rand.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Random Number Functions */
+#ifndef __RAND_H__
+#define __RAND_H__
+
+/**
+ * Generates a 16 bit pseudo random number
+ *
+ * Returns:
+ * uint16_t number
+ *
+ */
+uint16_t rand16(void);
+
+/**
+ * Generates a 32 bit pseudo random number
+ *
+ * Returns:
+ * uint32_t number
+ *
+ */
+uint32_t rand32(void);
+
+/**
+ * Resets the seed used by the pseudo random number generator
+ *
+ * Returns:
+ *
+ */
+void rand_init(void);
+
+#endif /* __RAND_H__ */
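
The generator behind these declarations (rand.c above) is the classic
16-bit Fibonacci LFSR seeded with 0xACE1, with taps at bits 0, 2, 3
and 5 (the x^16 + x^14 + x^13 + x^11 + 1 polynomial), so the sequence
is fully deterministic for a given seed:

	uint16_t a, b;

	rand_init();		/* reset to TF_RAND_LFSR_INIT_VALUE */
	a = rand16();
	rand_init();
	b = rand16();
	/* a == b: the same seed always yields the same sequence */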
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 3c5d55d..d82f746 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -12,6 +12,7 @@
 #include "tfp.h"
 #include "bitalloc.h"
 #include "bnxt.h"
+#include "rand.h"
 
 static inline uint32_t SWAP_WORDS32(uint32_t val32)
 {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index e05aea7..4ce2bc5 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -478,3 +478,347 @@ tf_msg_session_hw_resc_free(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+/**
+ * Sends session HW resource flush request to TF Firmware
+ */
+int
+tf_msg_session_hw_resc_flush(struct tf *tfp,
+			     enum tf_dir dir,
+			     struct tf_rm_entry *hw_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_hw_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req,
+			  l2_ctx_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req,
+			  prof_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req,
+			  prof_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req,
+			  em_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req,
+			  em_record_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req,
+			  wc_tcam_prof_id);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req,
+			  wc_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req,
+			  meter_profiles);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req,
+			  meter_inst);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req,
+			  mirrors);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req,
+			  upar);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req,
+			  sp_tcam_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req,
+			  l2_func);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req,
+			  flex_key_templ);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req,
+			  tbl_scope);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req,
+			  epoch0_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req,
+			  epoch1_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req,
+			  metadata);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req,
+			  ct_state);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req,
+			  range_prof);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req,
+			  range_entries);
+	TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req,
+			  lag_tbl_entries);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 TF_TYPE_TRUFLOW,
+			 HWRM_TFT_SESSION_HW_RESC_FLUSH,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource query capability request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_qcaps(struct tf *tfp,
+			       enum tf_dir dir,
+			       struct tf_rm_sram_query *query)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_qcaps_input req = { 0 };
+	struct tf_session_sram_resc_qcaps_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_SRAM_RESC_QCAPS,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_FULL_ACTION, resp,
+			      full_action);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_MCG, resp,
+			      mcg);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
+			      encap_8b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
+			      encap_16b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
+			      encap_64b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
+			      sp_smac);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, resp,
+			      sp_smac_ipv4);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, resp,
+			      sp_smac_ipv6);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
+			      counter_64b);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
+			      nat_sport);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
+			      nat_dport);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
+			      nat_s_ipv4);
+	TF_SRAM_RESP_TO_QUERY(query, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
+			      nat_d_ipv4);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource allocation request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_alloc(struct tf *tfp,
+			       enum tf_dir dir,
+			       struct tf_rm_sram_alloc *sram_alloc,
+			       struct tf_rm_entry *sram_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_alloc_input req = { 0 };
+	struct tf_session_sram_resc_alloc_output resp;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	memset(&resp, 0, sizeof(resp));
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
+			     full_action);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_MCG, req,
+			     mcg);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
+			     encap_8b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
+			     encap_16b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
+			     encap_64b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC, req,
+			     sp_smac);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+			     req, sp_smac_ipv4);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+			     req, sp_smac_ipv6);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_COUNTER_64B,
+			     req, counter_64b);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
+			     nat_sport);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
+			     nat_dport);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
+			     nat_s_ipv4);
+	TF_SRAM_ALLOC_TO_REQ(sram_alloc, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
+			     nat_d_ipv4);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_SESSION_SRAM_RESC_ALLOC,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Process the response */
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION,
+			      resp, full_action);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_MCG, resp,
+			      mcg);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, resp,
+			      encap_8b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, resp,
+			      encap_16b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, resp,
+			      encap_64b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, resp,
+			      sp_smac);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+			      resp, sp_smac_ipv4);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+			      resp, sp_smac_ipv6);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, resp,
+			      counter_64b);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, resp,
+			      nat_sport);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, resp,
+			      nat_dport);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, resp,
+			      nat_s_ipv4);
+	TF_SRAM_RESP_TO_ALLOC(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, resp,
+			      nat_d_ipv4);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource free request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_free(struct tf *tfp,
+			      enum tf_dir dir,
+			      struct tf_rm_entry *sram_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
+			    full_action);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
+			    mcg);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
+			    encap_8b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
+			    encap_16b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
+			    encap_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
+			    sp_smac);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
+			    sp_smac_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
+			    sp_smac_ipv6);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
+			    counter_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
+			    nat_sport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
+			    nat_dport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
+			    nat_s_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
+			    nat_d_ipv4);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_SESSION_SRAM_RESC_FREE,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+/**
+ * Sends session SRAM resource flush request to TF Firmware
+ */
+int
+tf_msg_session_sram_resc_flush(struct tf *tfp,
+			       enum tf_dir dir,
+			       struct tf_rm_entry *sram_entry)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_session_sram_resc_free_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_FULL_ACTION, req,
+			    full_action);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_MCG, req,
+			    mcg);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_8B, req,
+			    encap_8b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_16B, req,
+			    encap_16b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_ENCAP_64B, req,
+			    encap_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC, req,
+			    sp_smac);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV4, req,
+			    sp_smac_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_SP_SMAC_IPV6, req,
+			    sp_smac_ipv6);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_COUNTER_64B, req,
+			    counter_64b);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_SPORT, req,
+			    nat_sport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_DPORT, req,
+			    nat_dport);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_S_IPV4, req,
+			    nat_s_ipv4);
+	TF_SRAM_FREE_TO_REQ(sram_entry, TF_RESC_TYPE_SRAM_NAT_D_IPV4, req,
+			    nat_d_ipv4);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 TF_TYPE_TRUFLOW,
+			 HWRM_TFT_SESSION_SRAM_RESC_FLUSH,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index da5ccf3..057de84 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -83,4 +83,41 @@ int tf_msg_session_hw_resc_alloc(struct tf *tfp,
 int tf_msg_session_hw_resc_free(struct tf *tfp,
 				enum tf_dir dir,
 				struct tf_rm_entry *hw_entry);
+
+/**
+ * Sends session HW resource flush request to TF Firmware
+ */
+int tf_msg_session_hw_resc_flush(struct tf *tfp,
+				 enum tf_dir dir,
+				 struct tf_rm_entry *hw_entry);
+
+/**
+ * Sends session SRAM resource query capability request to TF Firmware
+ */
+int tf_msg_session_sram_resc_qcaps(struct tf *tfp,
+				   enum tf_dir dir,
+				   struct tf_rm_sram_query *sram_query);
+
+/**
+ * Sends session SRAM resource allocation request to TF Firmware
+ */
+int tf_msg_session_sram_resc_alloc(struct tf *tfp,
+				   enum tf_dir dir,
+				   struct tf_rm_sram_alloc *sram_alloc,
+				   struct tf_rm_entry *sram_entry);
+
+/**
+ * Sends session SRAM resource free request to TF Firmware
+ */
+int tf_msg_session_sram_resc_free(struct tf *tfp,
+				  enum tf_dir dir,
+				  struct tf_rm_entry *sram_entry);
+
+/**
+ * Sends session SRAM resource flush request to TF Firmware
+ */
+int tf_msg_session_sram_resc_flush(struct tf *tfp,
+				   enum tf_dir dir,
+				   struct tf_rm_entry *sram_entry);
+
 #endif  /* _TF_MSG_H_ */
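
A hedged sketch of the qcaps flow these prototypes support, given an
open session handle tfp and using the tf_rm_sram_query layout added
to tf_rm.h in the previous patch (min/max are 16-bit values per the
endian converters in tf_msg.c):

	struct tf_rm_sram_query query;
	int rc;

	rc = tf_msg_session_sram_resc_qcaps(tfp, TF_DIR_RX, &query);
	if (rc == 0) {
		uint16_t min = query.sram_query[TF_RESC_TYPE_SRAM_FULL_ACTION].min;
		uint16_t max = query.sram_query[TF_RESC_TYPE_SRAM_FULL_ACTION].max;

		/* ... size the session's full-action pool within [min, max] ... */
	}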
diff --git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h
index 8dbb2f9..05e131f 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,6 +6,487 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
+/*
+ * Hardware specific MAX values
+ * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
+ */
+
+/* Common HW resources for all chip variants */
+#define TF_NUM_L2_CTXT_TCAM      1024      /* < Number of L2 context TCAM
+					    * entries
+					    */
+#define TF_NUM_PROF_FUNC          128      /* < Number prof_func ID */
+#define TF_NUM_PROF_TCAM         1024      /* < Number entries in profile
+					    * TCAM
+					    */
+#define TF_NUM_EM_PROF_ID          64      /* < Number software EM Profile
+					    * IDs
+					    */
+#define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */
+#define TF_NUM_WC_TCAM_ROW        256      /* < Number of slices per row in WC
+					    * TCAM. A slice is a WC TCAM entry.
+					    */
+#define TF_NUM_METER_PROF         256      /* < Number of meter profiles */
+#define TF_NUM_METER             1024      /* < Number of meter instances */
+#define TF_NUM_MIRROR               2      /* < Number of mirror instances */
+#define TF_NUM_UPAR                 2      /* < Number of UPAR instances */
+
+/* Wh+/Brd2 specific HW resources */
+#define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM
+					    * entries
+					    */
+
+/* Brd2/Brd4 specific HW resources */
+#define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */
+
+
+/* Brd3, Brd4 common HW resources */
+#define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder
+					    * templates
+					    */
+
+/* Brd4 specific HW resources */
+#define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */
+#define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */
+#define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */
+#define TF_NUM_METADATA             8      /* < Number of MetaData Profiles */
+#define TF_NUM_CT_STATE            32      /* < Number of Connection Tracking
+					    * States
+					    */
+#define TF_NUM_RANGE_PROF          16      /* < Number of Range Profiles */
+#define TF_NUM_RANGE_ENTRY (64 * 1024)     /* < Number of Range Entries */
+#define TF_NUM_LAG_ENTRY          256      /* < Number of LAG Entries */
+
+/*
+ * Common for the Reserved Resource defines below:
+ *
+ * - HW Resources
+ *   For resources where a priority level plays a role, e.g. L2 ctx
+ *   TCAM entries, both a number of resources and a begin/end pair
+ *   are required. The begin/end is used to assure TFLIB gets the
+ *   correct priority setting for that resource.
+ *
+ *   For EM records no priority is required, thus a number of
+ *   resources is sufficient.
+ *
+ *   Example, TCAM:
+ *     64 L2 CTXT TCAM entries in a 1024-entry pool would occupy
+ *     entries 0-63, as HW presents 0 as the highest priority entry.
+ *
+ * - SRAM Resources
+ *   Handled as regular resources as there is no priority required.
+ *
+ * Common for these resources is that they are handled per direction,
+ * rx/tx.
+ */
+
+/* HW Resources */
+
+/* L2 CTX */
+#define TF_RSVD_L2_CTXT_TCAM_RX                   64
+#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_RX         0
+#define TF_RSVD_L2_CTXT_TCAM_END_IDX_RX           (TF_RSVD_L2_CTXT_TCAM_RX - 1)
+#define TF_RSVD_L2_CTXT_TCAM_TX                   960
+#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_TX         0
+#define TF_RSVD_L2_CTXT_TCAM_END_IDX_TX           (TF_RSVD_L2_CTXT_TCAM_TX - 1)
+
+/* Profiler */
+#define TF_RSVD_PROF_FUNC_RX                      64
+#define TF_RSVD_PROF_FUNC_BEGIN_IDX_RX            64
+#define TF_RSVD_PROF_FUNC_END_IDX_RX              127
+#define TF_RSVD_PROF_FUNC_TX                      64
+#define TF_RSVD_PROF_FUNC_BEGIN_IDX_TX            64
+#define TF_RSVD_PROF_FUNC_END_IDX_TX              127
+
+#define TF_RSVD_PROF_TCAM_RX                      64
+#define TF_RSVD_PROF_TCAM_BEGIN_IDX_RX            960
+#define TF_RSVD_PROF_TCAM_END_IDX_RX              1023
+#define TF_RSVD_PROF_TCAM_TX                      64
+#define TF_RSVD_PROF_TCAM_BEGIN_IDX_TX            960
+#define TF_RSVD_PROF_TCAM_END_IDX_TX              1023
+
+/* EM Profiles IDs */
+#define TF_RSVD_EM_PROF_ID_RX                     64
+#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_RX           0
+#define TF_RSVD_EM_PROF_ID_END_IDX_RX             63  /* Fewer on CU+ than SR */
+#define TF_RSVD_EM_PROF_ID_TX                     64
+#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_TX           0
+#define TF_RSVD_EM_PROF_ID_END_IDX_TX             63  /* Fewer on CU+ than SR */
+
+/* EM Records */
+#define TF_RSVD_EM_REC_RX                         16000
+#define TF_RSVD_EM_REC_BEGIN_IDX_RX               0
+#define TF_RSVD_EM_REC_TX                         16000
+#define TF_RSVD_EM_REC_BEGIN_IDX_TX               0
+
+/* Wildcard */
+#define TF_RSVD_WC_TCAM_PROF_ID_RX                128
+#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_RX      128
+#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_RX        255
+#define TF_RSVD_WC_TCAM_PROF_ID_TX                128
+#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_TX      128
+#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_TX        255
+
+#define TF_RSVD_WC_TCAM_RX                        64
+#define TF_RSVD_WC_TCAM_BEGIN_IDX_RX              0
+#define TF_RSVD_WC_TCAM_END_IDX_RX                63
+#define TF_RSVD_WC_TCAM_TX                        64
+#define TF_RSVD_WC_TCAM_BEGIN_IDX_TX              0
+#define TF_RSVD_WC_TCAM_END_IDX_TX                63
+
+#define TF_RSVD_METER_PROF_RX                     0
+#define TF_RSVD_METER_PROF_BEGIN_IDX_RX           0
+#define TF_RSVD_METER_PROF_END_IDX_RX             0
+#define TF_RSVD_METER_PROF_TX                     0
+#define TF_RSVD_METER_PROF_BEGIN_IDX_TX           0
+#define TF_RSVD_METER_PROF_END_IDX_TX             0
+
+#define TF_RSVD_METER_INST_RX                     0
+#define TF_RSVD_METER_INST_BEGIN_IDX_RX           0
+#define TF_RSVD_METER_INST_END_IDX_RX             0
+#define TF_RSVD_METER_INST_TX                     0
+#define TF_RSVD_METER_INST_BEGIN_IDX_TX           0
+#define TF_RSVD_METER_INST_END_IDX_TX             0
+
+/* Mirror */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_MIRROR_RX                         0
+#define TF_RSVD_MIRROR_BEGIN_IDX_RX               0
+#define TF_RSVD_MIRROR_END_IDX_RX                 0
+#define TF_RSVD_MIRROR_TX                         0
+#define TF_RSVD_MIRROR_BEGIN_IDX_TX               0
+#define TF_RSVD_MIRROR_END_IDX_TX                 0
+
+/* UPAR */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_UPAR_RX                           0
+#define TF_RSVD_UPAR_BEGIN_IDX_RX                 0
+#define TF_RSVD_UPAR_END_IDX_RX                   0
+#define TF_RSVD_UPAR_TX                           0
+#define TF_RSVD_UPAR_BEGIN_IDX_TX                 0
+#define TF_RSVD_UPAR_END_IDX_TX                   0
+
+/* Source Properties */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_SP_TCAM_RX                        0
+#define TF_RSVD_SP_TCAM_BEGIN_IDX_RX              0
+#define TF_RSVD_SP_TCAM_END_IDX_RX                0
+#define TF_RSVD_SP_TCAM_TX                        0
+#define TF_RSVD_SP_TCAM_BEGIN_IDX_TX              0
+#define TF_RSVD_SP_TCAM_END_IDX_TX                0
+
+/* L2 Func */
+#define TF_RSVD_L2_FUNC_RX                        0
+#define TF_RSVD_L2_FUNC_BEGIN_IDX_RX              0
+#define TF_RSVD_L2_FUNC_END_IDX_RX                0
+#define TF_RSVD_L2_FUNC_TX                        0
+#define TF_RSVD_L2_FUNC_BEGIN_IDX_TX              0
+#define TF_RSVD_L2_FUNC_END_IDX_TX                0
+
+/* FKB */
+#define TF_RSVD_FKB_RX                            0
+#define TF_RSVD_FKB_BEGIN_IDX_RX                  0
+#define TF_RSVD_FKB_END_IDX_RX                    0
+#define TF_RSVD_FKB_TX                            0
+#define TF_RSVD_FKB_BEGIN_IDX_TX                  0
+#define TF_RSVD_FKB_END_IDX_TX                    0
+
+/* TBL Scope */
+#define TF_RSVD_TBL_SCOPE_RX                      1
+#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_RX            0
+#define TF_RSVD_TBL_SCOPE_END_IDX_RX              1
+#define TF_RSVD_TBL_SCOPE_TX                      1
+#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_TX            0
+#define TF_RSVD_TBL_SCOPE_END_IDX_TX              1
+
+/* EPOCH0 */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_EPOCH0_RX                         0
+#define TF_RSVD_EPOCH0_BEGIN_IDX_RX               0
+#define TF_RSVD_EPOCH0_END_IDX_RX                 0
+#define TF_RSVD_EPOCH0_TX                         0
+#define TF_RSVD_EPOCH0_BEGIN_IDX_TX               0
+#define TF_RSVD_EPOCH0_END_IDX_TX                 0
+
+/* EPOCH1 */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_EPOCH1_RX                         0
+#define TF_RSVD_EPOCH1_BEGIN_IDX_RX               0
+#define TF_RSVD_EPOCH1_END_IDX_RX                 0
+#define TF_RSVD_EPOCH1_TX                         0
+#define TF_RSVD_EPOCH1_BEGIN_IDX_TX               0
+#define TF_RSVD_EPOCH1_END_IDX_TX                 0
+
+/* METADATA */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_METADATA_RX                       0
+#define TF_RSVD_METADATA_BEGIN_IDX_RX             0
+#define TF_RSVD_METADATA_END_IDX_RX               0
+#define TF_RSVD_METADATA_TX                       0
+#define TF_RSVD_METADATA_BEGIN_IDX_TX             0
+#define TF_RSVD_METADATA_END_IDX_TX               0
+
+/* CT_STATE */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_CT_STATE_RX                       0
+#define TF_RSVD_CT_STATE_BEGIN_IDX_RX             0
+#define TF_RSVD_CT_STATE_END_IDX_RX               0
+#define TF_RSVD_CT_STATE_TX                       0
+#define TF_RSVD_CT_STATE_BEGIN_IDX_TX             0
+#define TF_RSVD_CT_STATE_END_IDX_TX               0
+
+/* RANGE_PROF */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_RANGE_PROF_RX                     0
+#define TF_RSVD_RANGE_PROF_BEGIN_IDX_RX           0
+#define TF_RSVD_RANGE_PROF_END_IDX_RX             0
+#define TF_RSVD_RANGE_PROF_TX                     0
+#define TF_RSVD_RANGE_PROF_BEGIN_IDX_TX           0
+#define TF_RSVD_RANGE_PROF_END_IDX_TX             0
+
+/* RANGE_ENTRY */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_RANGE_ENTRY_RX                    0
+#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_RX          0
+#define TF_RSVD_RANGE_ENTRY_END_IDX_RX            0
+#define TF_RSVD_RANGE_ENTRY_TX                    0
+#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_TX          0
+#define TF_RSVD_RANGE_ENTRY_END_IDX_TX            0
+
+/* LAG_ENTRY */
+/* Not yet supported fully in the infra */
+#define TF_RSVD_LAG_ENTRY_RX                      0
+#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_RX            0
+#define TF_RSVD_LAG_ENTRY_END_IDX_RX              0
+#define TF_RSVD_LAG_ENTRY_TX                      0
+#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_TX            0
+#define TF_RSVD_LAG_ENTRY_END_IDX_TX              0
+
+/* SRAM - Resources
+ * Limited to the types that CFA provides.
+ */
+#define TF_RSVD_SRAM_FULL_ACTION_RX               8001
+#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX     0
+#define TF_RSVD_SRAM_FULL_ACTION_TX               8001
+#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX     0
+
+/* Not yet supported fully in the infra */
+#define TF_RSVD_SRAM_MCG_RX                       0
+#define TF_RSVD_SRAM_MCG_BEGIN_IDX_RX             0
+/* Multicast Group on TX is not supported */
+#define TF_RSVD_SRAM_MCG_TX                       0
+#define TF_RSVD_SRAM_MCG_BEGIN_IDX_TX             0
+
+/* First encap of 8B RX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_8B_RX                  32
+#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX        0
+/* First encap of 8B TX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_8B_TX                  0
+#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX        0
+
+#define TF_RSVD_SRAM_ENCAP_16B_RX                 16
+#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX       0
+/* First encap of 16B TX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_16B_TX                 20
+#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX       0
+
+/* Encap of 64B on RX is not supported */
+#define TF_RSVD_SRAM_ENCAP_64B_RX                 0
+#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_RX       0
+/* First encap of 64B TX is reserved by CFA */
+#define TF_RSVD_SRAM_ENCAP_64B_TX                 1007
+#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX       0
+
+#define TF_RSVD_SRAM_SP_SMAC_RX                   0
+#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX         0
+#define TF_RSVD_SRAM_SP_SMAC_TX                   0
+#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX         0
+
+/* SRAM SP IPV4 on RX is not supported */
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_RX              0
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_RX    0
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_TX              511
+#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX    0
+
+/* SRAM SP IPV6 on RX is not supported */
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_RX              0
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_RX    0
+/* Not yet supported fully in infra */
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_TX              0
+#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX    0
+
+#define TF_RSVD_SRAM_COUNTER_64B_RX               160
+#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX     0
+#define TF_RSVD_SRAM_COUNTER_64B_TX               160
+#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX     0
+
+#define TF_RSVD_SRAM_NAT_SPORT_RX                 0
+#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX       0
+#define TF_RSVD_SRAM_NAT_SPORT_TX                 0
+#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX       0
+
+#define TF_RSVD_SRAM_NAT_DPORT_RX                 0
+#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX       0
+#define TF_RSVD_SRAM_NAT_DPORT_TX                 0
+#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX       0
+
+#define TF_RSVD_SRAM_NAT_S_IPV4_RX                0
+#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX      0
+#define TF_RSVD_SRAM_NAT_S_IPV4_TX                0
+#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX      0
+
+#define TF_RSVD_SRAM_NAT_D_IPV4_RX                0
+#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX      0
+#define TF_RSVD_SRAM_NAT_D_IPV4_TX                0
+#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX      0
+
+/* HW Resource Pool names */
+
+#define TF_L2_CTXT_TCAM_POOL_NAME         l2_ctxt_tcam_pool
+#define TF_L2_CTXT_TCAM_POOL_NAME_RX      l2_ctxt_tcam_pool_rx
+#define TF_L2_CTXT_TCAM_POOL_NAME_TX      l2_ctxt_tcam_pool_tx
+
+#define TF_PROF_FUNC_POOL_NAME            prof_func_pool
+#define TF_PROF_FUNC_POOL_NAME_RX         prof_func_pool_rx
+#define TF_PROF_FUNC_POOL_NAME_TX         prof_func_pool_tx
+
+#define TF_PROF_TCAM_POOL_NAME            prof_tcam_pool
+#define TF_PROF_TCAM_POOL_NAME_RX         prof_tcam_pool_rx
+#define TF_PROF_TCAM_POOL_NAME_TX         prof_tcam_pool_tx
+
+#define TF_EM_PROF_ID_POOL_NAME           em_prof_id_pool
+#define TF_EM_PROF_ID_POOL_NAME_RX        em_prof_id_pool_rx
+#define TF_EM_PROF_ID_POOL_NAME_TX        em_prof_id_pool_tx
+
+#define TF_WC_TCAM_PROF_ID_POOL_NAME      wc_tcam_prof_id_pool
+#define TF_WC_TCAM_PROF_ID_POOL_NAME_RX   wc_tcam_prof_id_pool_rx
+#define TF_WC_TCAM_PROF_ID_POOL_NAME_TX   wc_tcam_prof_id_pool_tx
+
+#define TF_WC_TCAM_POOL_NAME              wc_tcam_pool
+#define TF_WC_TCAM_POOL_NAME_RX           wc_tcam_pool_rx
+#define TF_WC_TCAM_POOL_NAME_TX           wc_tcam_pool_tx
+
+#define TF_METER_PROF_POOL_NAME           meter_prof_pool
+#define TF_METER_PROF_POOL_NAME_RX        meter_prof_pool_rx
+#define TF_METER_PROF_POOL_NAME_TX        meter_prof_pool_tx
+
+#define TF_METER_INST_POOL_NAME           meter_inst_pool
+#define TF_METER_INST_POOL_NAME_RX        meter_inst_pool_rx
+#define TF_METER_INST_POOL_NAME_TX        meter_inst_pool_tx
+
+#define TF_MIRROR_POOL_NAME               mirror_pool
+#define TF_MIRROR_POOL_NAME_RX            mirror_pool_rx
+#define TF_MIRROR_POOL_NAME_TX            mirror_pool_tx
+
+#define TF_UPAR_POOL_NAME                 upar_pool
+#define TF_UPAR_POOL_NAME_RX              upar_pool_rx
+#define TF_UPAR_POOL_NAME_TX              upar_pool_tx
+
+#define TF_SP_TCAM_POOL_NAME              sp_tcam_pool
+#define TF_SP_TCAM_POOL_NAME_RX           sp_tcam_pool_rx
+#define TF_SP_TCAM_POOL_NAME_TX           sp_tcam_pool_tx
+
+#define TF_FKB_POOL_NAME                  fkb_pool
+#define TF_FKB_POOL_NAME_RX               fkb_pool_rx
+#define TF_FKB_POOL_NAME_TX               fkb_pool_tx
+
+#define TF_TBL_SCOPE_POOL_NAME            tbl_scope_pool
+#define TF_TBL_SCOPE_POOL_NAME_RX         tbl_scope_pool_rx
+#define TF_TBL_SCOPE_POOL_NAME_TX         tbl_scope_pool_tx
+
+#define TF_L2_FUNC_POOL_NAME              l2_func_pool
+#define TF_L2_FUNC_POOL_NAME_RX           l2_func_pool_rx
+#define TF_L2_FUNC_POOL_NAME_TX           l2_func_pool_tx
+
+#define TF_EPOCH0_POOL_NAME               epoch0_pool
+#define TF_EPOCH0_POOL_NAME_RX            epoch0_pool_rx
+#define TF_EPOCH0_POOL_NAME_TX            epoch0_pool_tx
+
+#define TF_EPOCH1_POOL_NAME               epoch1_pool
+#define TF_EPOCH1_POOL_NAME_RX            epoch1_pool_rx
+#define TF_EPOCH1_POOL_NAME_TX            epoch1_pool_tx
+
+#define TF_METADATA_POOL_NAME             metadata_pool
+#define TF_METADATA_POOL_NAME_RX          metadata_pool_rx
+#define TF_METADATA_POOL_NAME_TX          metadata_pool_tx
+
+#define TF_CT_STATE_POOL_NAME             ct_state_pool
+#define TF_CT_STATE_POOL_NAME_RX          ct_state_pool_rx
+#define TF_CT_STATE_POOL_NAME_TX          ct_state_pool_tx
+
+#define TF_RANGE_PROF_POOL_NAME           range_prof_pool
+#define TF_RANGE_PROF_POOL_NAME_RX        range_prof_pool_rx
+#define TF_RANGE_PROF_POOL_NAME_TX        range_prof_pool_tx
+
+#define TF_RANGE_ENTRY_POOL_NAME          range_entry_pool
+#define TF_RANGE_ENTRY_POOL_NAME_RX       range_entry_pool_rx
+#define TF_RANGE_ENTRY_POOL_NAME_TX       range_entry_pool_tx
+
+#define TF_LAG_ENTRY_POOL_NAME            lag_entry_pool
+#define TF_LAG_ENTRY_POOL_NAME_RX         lag_entry_pool_rx
+#define TF_LAG_ENTRY_POOL_NAME_TX         lag_entry_pool_tx
+
+/* SRAM Resource Pool names */
+#define TF_SRAM_FULL_ACTION_POOL_NAME     sram_full_action_pool
+#define TF_SRAM_FULL_ACTION_POOL_NAME_RX  sram_full_action_pool_rx
+#define TF_SRAM_FULL_ACTION_POOL_NAME_TX  sram_full_action_pool_tx
+
+#define TF_SRAM_MCG_POOL_NAME             sram_mcg_pool
+#define TF_SRAM_MCG_POOL_NAME_RX          sram_mcg_pool_rx
+#define TF_SRAM_MCG_POOL_NAME_TX          sram_mcg_pool_tx
+
+#define TF_SRAM_ENCAP_8B_POOL_NAME        sram_encap_8b_pool
+#define TF_SRAM_ENCAP_8B_POOL_NAME_RX     sram_encap_8b_pool_rx
+#define TF_SRAM_ENCAP_8B_POOL_NAME_TX     sram_encap_8b_pool_tx
+
+#define TF_SRAM_ENCAP_16B_POOL_NAME       sram_encap_16b_pool
+#define TF_SRAM_ENCAP_16B_POOL_NAME_RX    sram_encap_16b_pool_rx
+#define TF_SRAM_ENCAP_16B_POOL_NAME_TX    sram_encap_16b_pool_tx
+
+#define TF_SRAM_ENCAP_64B_POOL_NAME       sram_encap_64b_pool
+#define TF_SRAM_ENCAP_64B_POOL_NAME_RX    sram_encap_64b_pool_rx
+#define TF_SRAM_ENCAP_64B_POOL_NAME_TX    sram_encap_64b_pool_tx
+
+#define TF_SRAM_SP_SMAC_POOL_NAME         sram_sp_smac_pool
+#define TF_SRAM_SP_SMAC_POOL_NAME_RX      sram_sp_smac_pool_rx
+#define TF_SRAM_SP_SMAC_POOL_NAME_TX      sram_sp_smac_pool_tx
+
+#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME    sram_sp_smac_ipv4_pool
+#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_RX sram_sp_smac_ipv4_pool_rx
+#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX sram_sp_smac_ipv4_pool_tx
+
+#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME    sram_sp_smac_ipv6_pool
+#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_RX sram_sp_smac_ipv6_pool_rx
+#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX sram_sp_smac_ipv6_pool_tx
+
+#define TF_SRAM_STATS_64B_POOL_NAME       sram_stats_64b_pool
+#define TF_SRAM_STATS_64B_POOL_NAME_RX    sram_stats_64b_pool_rx
+#define TF_SRAM_STATS_64B_POOL_NAME_TX    sram_stats_64b_pool_tx
+
+#define TF_SRAM_NAT_SPORT_POOL_NAME       sram_nat_sport_pool
+#define TF_SRAM_NAT_SPORT_POOL_NAME_RX    sram_nat_sport_pool_rx
+#define TF_SRAM_NAT_SPORT_POOL_NAME_TX    sram_nat_sport_pool_tx
+
+#define TF_SRAM_NAT_DPORT_POOL_NAME       sram_nat_dport_pool
+#define TF_SRAM_NAT_DPORT_POOL_NAME_RX    sram_nat_dport_pool_rx
+#define TF_SRAM_NAT_DPORT_POOL_NAME_TX    sram_nat_dport_pool_tx
+
+#define TF_SRAM_NAT_S_IPV4_POOL_NAME      sram_nat_s_ipv4_pool
+#define TF_SRAM_NAT_S_IPV4_POOL_NAME_RX   sram_nat_s_ipv4_pool_rx
+#define TF_SRAM_NAT_S_IPV4_POOL_NAME_TX   sram_nat_s_ipv4_pool_tx
+
+#define TF_SRAM_NAT_D_IPV4_POOL_NAME      sram_nat_d_ipv4_pool
+#define TF_SRAM_NAT_D_IPV4_POOL_NAME_RX   sram_nat_d_ipv4_pool_rx
+#define TF_SRAM_NAT_D_IPV4_POOL_NAME_TX   sram_nat_d_ipv4_pool_tx
+
+/* Sw Resource Pool Names */
+
+#define TF_L2_CTXT_REMAP_POOL_NAME         l2_ctxt_remap_pool
+#define TF_L2_CTXT_REMAP_POOL_NAME_RX      l2_ctxt_remap_pool_rx
+#define TF_L2_CTXT_REMAP_POOL_NAME_TX      l2_ctxt_remap_pool_tx
+
+
 /** HW Resource types
  */
 enum tf_resource_type_hw {
@@ -57,4 +538,5 @@ enum tf_resource_type_sram {
 	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
 	TF_RESC_TYPE_SRAM_MAX
 };
+
 #endif /* _TF_RESOURCES_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 07/34] net/bnxt: add initial tf core resource mgmt support
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (5 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 06/34] net/bnxt: add tf core session sram functions Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 08/34] net/bnxt: add resource manager functionality Venkat Duvvuru
                         ` (29 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add TruFlow public API definitions for resources
  as well as RM infrastructure

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile             |    1 +
 drivers/net/bnxt/tf_core/tf_core.c    |   40 +
 drivers/net/bnxt/tf_core/tf_core.h    |  125 +++
 drivers/net/bnxt/tf_core/tf_rm.c      | 1731 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_rm.h      |  175 ++++
 drivers/net/bnxt/tf_core/tf_session.h |  206 +++-
 6 files changed, 2277 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_rm.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index c39c098..02f8c3f 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -51,6 +51,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
 #
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index d82f746..7d76efa 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -20,6 +20,29 @@ static inline uint32_t SWAP_WORDS32(uint32_t val32)
 		((val32 & 0xffff0000) >> 16));
 }
 
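+/**
+ * Initialize the EM lookup hash seeds for a session. RX and TX share
+ * the same seed values; even seed words receive a full 32-bit random
+ * value while odd words carry only a single random bit.
+ */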
+static void tf_seeds_init(struct tf_session *session)
+{
+	int i;
+	uint32_t r;
+
+	/* Initialize the lfsr */
+	rand_init();
+
+	/* RX and TX use the same seed values */
+	session->lkup_lkup3_init_cfg[TF_DIR_RX] =
+		session->lkup_lkup3_init_cfg[TF_DIR_TX] =
+						SWAP_WORDS32(rand32());
+
+	for (i = 0; i < TF_LKUP_SEED_MEM_SIZE / 2; i++) {
+		r = SWAP_WORDS32(rand32());
+		session->lkup_em_seed_mem[TF_DIR_RX][i * 2] = r;
+		session->lkup_em_seed_mem[TF_DIR_TX][i * 2] = r;
+		r = SWAP_WORDS32(rand32());
+		session->lkup_em_seed_mem[TF_DIR_RX][i * 2 + 1] = (r & 0x1);
+		session->lkup_em_seed_mem[TF_DIR_TX][i * 2 + 1] = (r & 0x1);
+	}
+}
+
 int
 tf_open_session(struct tf                    *tfp,
 		struct tf_open_session_parms *parms)
@@ -109,6 +132,7 @@ tf_open_session(struct tf                    *tfp,
 
 	/* Initialize Session */
 	session->device_type = parms->device_type;
+	tf_rm_init(tfp);
 
 	/* Construct the Session ID */
 	session->session_id.internal.domain = domain;
@@ -125,6 +149,16 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup_close;
 	}
 
+	/* Adjust the Session with what firmware allowed us to get */
+	rc = tf_rm_allocate_validate(tfp);
+	if (rc) {
+		/* Log error */
+		goto cleanup_close;
+	}
+
+	/* Setup hash seeds */
+	tf_seeds_init(session);
+
 	session->ref_count++;
 
 	/* Return session ID */
@@ -195,6 +229,12 @@ tf_close_session(struct tf *tfp)
 
 	tfs = (struct tf_session *)(tfp->session->core_data);
 
+	/* Cleanup if we're last user of the session */
+	if (tfs->ref_count == 1) {
+		/* Cleanup any outstanding resources */
+		rc_close = tf_rm_close(tfp);
+	}
+
 	if (tfs->session_id.id != TF_SESSION_ID_INVALID) {
 		rc = tf_msg_session_close(tfp);
 		if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 69433ac..3455d8f 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -344,4 +344,129 @@ int tf_attach_session(struct tf *tfp,
  */
 int tf_close_session(struct tf *tfp);
 
+/**
+ * @page  ident Identity Management
+ *
+ * @ref tf_alloc_identifier
+ *
+ * @ref tf_free_identifier
+ */
+enum tf_identifier_type {
+	/** The L2 Context is returned from the L2 Ctxt TCAM lookup
+	 *  and can be used in WC TCAM or EM keys to virtualize further
+	 *  lookups.
+	 */
+	TF_IDENT_TYPE_L2_CTXT,
+	/** The WC profile func is returned from the L2 Ctxt TCAM lookup
+	 *  to enable virtualization of the profile TCAM.
+	 */
+	TF_IDENT_TYPE_PROF_FUNC,
+	/** The WC profile ID is included in the WC lookup key
+	 *  to enable virtualization of the WC TCAM hardware.
+	 */
+	TF_IDENT_TYPE_WC_PROF,
+	/** The EM profile ID is included in the EM lookup key
+	 *  to enable virtualization of the EM hardware. (not required for Brd4
+	 *  as it has table scope)
+	 */
+	TF_IDENT_TYPE_EM_PROF,
+	/** The L2 func is included in the ILT result and from recycling to
+	 *  enable virtualization of further lookups.
+	 */
+	TF_IDENT_TYPE_L2_FUNC
+};
+
+/**
+ * TCAM table type
+ */
+enum tf_tcam_tbl_type {
+	TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	TF_TCAM_TBL_TYPE_PROF_TCAM,
+	TF_TCAM_TBL_TYPE_WC_TCAM,
+	TF_TCAM_TBL_TYPE_SP_TCAM,
+	TF_TCAM_TBL_TYPE_CT_RULE_TCAM,
+	TF_TCAM_TBL_TYPE_VEB_TCAM,
+	TF_TCAM_TBL_TYPE_MAX
+};
+
+/**
+ * Enumeration of TruFlow table types. A table type is used to identify a
+ * resource object.
+ *
+ * NOTE: The table type TF_TBL_TYPE_EXT is unique in that it is
+ * the only table type that is connected with a table scope.
+ */
+enum tf_tbl_type {
+	/** Wh+/Brd2 Action Record */
+	TF_TBL_TYPE_FULL_ACT_RECORD,
+	/** Multicast Groups */
+	TF_TBL_TYPE_MCAST_GROUPS,
+	/** Action Encap 8 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_8B,
+	/** Action Encap 16 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_16B,
+	/** Action Encap 32 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_32B,
+	/** Action Encap 64 Bytes */
+	TF_TBL_TYPE_ACT_ENCAP_64B,
+	/** Action Source Properties SMAC */
+	TF_TBL_TYPE_ACT_SP_SMAC,
+	/** Action Source Properties SMAC IPv4 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
+	/** Action Source Properties SMAC IPv6 */
+	TF_TBL_TYPE_ACT_SP_SMAC_IPV6,
+	/** Action Statistics 64 Bits */
+	TF_TBL_TYPE_ACT_STATS_64,
+	/** Action Modify L4 Src Port */
+	TF_TBL_TYPE_ACT_MODIFY_SPORT,
+	/** Action Modify L4 Dest Port */
+	TF_TBL_TYPE_ACT_MODIFY_DPORT,
+	/** Action Modify IPv4 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC,
+	/** Action Modify IPv4 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST,
+	/** Action Modify IPv6 Source */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC,
+	/** Action Modify IPv6 Destination */
+	TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST,
+
+	/* HW */
+
+	/** Meter Profiles */
+	TF_TBL_TYPE_METER_PROF,
+	/** Meter Instance */
+	TF_TBL_TYPE_METER_INST,
+	/** Mirror Config */
+	TF_TBL_TYPE_MIRROR_CONFIG,
+	/** UPAR */
+	TF_TBL_TYPE_UPAR,
+	/** Brd4 Epoch 0 table */
+	TF_TBL_TYPE_EPOCH0,
+	/** Brd4 Epoch 1 table  */
+	TF_TBL_TYPE_EPOCH1,
+	/** Brd4 Metadata  */
+	TF_TBL_TYPE_METADATA,
+	/** Brd4 CT State  */
+	TF_TBL_TYPE_CT_STATE,
+	/** Brd4 Range Profile  */
+	TF_TBL_TYPE_RANGE_PROF,
+	/** Brd4 Range Entry  */
+	TF_TBL_TYPE_RANGE_ENTRY,
+	/** Brd4 LAG Entry  */
+	TF_TBL_TYPE_LAG,
+	/** Brd4 only VNIC/SVIF Table */
+	TF_TBL_TYPE_VNIC_SVIF,
+
+	/* External */
+
+	/** External table type - initially one pool of poolsize
+	 * entries. All external table types are associated with a
+	 * table scope; internal types are not.
+	 */
+	TF_TBL_TYPE_EXT,
+	/** Future - external pool of size0 entries */
+	TF_TBL_TYPE_EXT_0,
+	TF_TBL_TYPE_MAX
+};
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
new file mode 100644
index 0000000..56767e7
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -0,0 +1,1731 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+
+#include <rte_common.h>
+
+#include "tf_rm.h"
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tf_resources.h"
+#include "tf_msg.h"
+#include "bnxt.h"
+
+/**
+ * Internal macro to perform HW resource allocation check between what
+ * firmware reports vs what was statically requested.
+ *
+ * Parameters:
+ *   struct tf_rm_hw_query    *hquery      - Pointer to the hw query result
+ *   enum tf_dir               dir         - Direction to process
+ *   enum tf_resource_type_hw  hcapi_type  - HCAPI type, the index element
+ *                                           in the hw query structure
+ *   define                    def_value   - Define value to check against
+ *   uint32_t                 *eflag       - Result of the check
+ */
+#define TF_RM_CHECK_HW_ALLOC(hquery, dir, hcapi_type, def_value, eflag) do {  \
+	if ((dir) == TF_DIR_RX) {					      \
+		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _RX) \
+			*(eflag) |= 1 << (hcapi_type);			      \
+	} else {							      \
+		if ((hquery)->hw_query[(hcapi_type)].max != def_value ## _TX) \
+			*(eflag) |= 1 << (hcapi_type);			      \
+	}								      \
+} while (0)
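+
+/*
+ * Illustration (comment only): with dir == TF_DIR_RX, hcapi_type ==
+ * TF_RESC_TYPE_HW_RANGE_ENTRY and def_value == TF_RSVD_RANGE_ENTRY
+ * the check expands to
+ *
+ *   if (hquery->hw_query[TF_RESC_TYPE_HW_RANGE_ENTRY].max !=
+ *       TF_RSVD_RANGE_ENTRY_RX)
+ *           *eflag |= 1 << TF_RESC_TYPE_HW_RANGE_ENTRY;
+ */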
+
+/**
+ * Internal macro to perform SRAM resource allocation check between
+ * what firmware reports vs what was statically requested.
+ *
+ * Parameters:
+ *   struct tf_rm_sram_query   *squery      - Pointer to the sram query result
+ *   enum tf_dir                dir         - Direction to process
+ *   enum tf_resource_type_sram hcapi_type  - HCAPI type, the index element
+ *                                            in the sram query structure
+ *   define                     def_value   - Define value to check against
+ *   uint32_t                  *eflag       - Result of the check
+ */
+#define TF_RM_CHECK_SRAM_ALLOC(squery, dir, hcapi_type, def_value, eflag) do { \
+	if ((dir) == TF_DIR_RX) {					       \
+		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _RX)\
+			*(eflag) |= 1 << (hcapi_type);			       \
+	} else {							       \
+		if ((squery)->sram_query[(hcapi_type)].max != def_value ## _TX)\
+			*(eflag) |= 1 << (hcapi_type);			       \
+	}								       \
+} while (0)
+
+/**
+ * Internal macro to convert a reserved resource define name to be
+ * direction specific.
+ *
+ * Parameters:
+ *   enum tf_dir    dir         - Direction to process
+ *   string         type        - Type name to append RX or TX to
+ *   string         dtype       - Direction specific type
+ */
+#define TF_RESC_RSVD(dir, type, dtype) do {	\
+		if ((dir) == TF_DIR_RX)		\
+			(dtype) = type ## _RX;	\
+		else				\
+			(dtype) = type ## _TX;	\
+	} while (0)
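+
+/*
+ * Illustration (comment only): TF_RESC_RSVD(TF_DIR_TX,
+ * TF_RSVD_SRAM_MCG, value) assigns value = TF_RSVD_SRAM_MCG_TX,
+ * i.e. 0 with the current reservations.
+ */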
+
+const char
+*tf_dir_2_str(enum tf_dir dir)
+{
+	switch (dir) {
+	case TF_DIR_RX:
+		return "RX";
+	case TF_DIR_TX:
+		return "TX";
+	default:
+		return "Invalid direction";
+	}
+}
+
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type)
+{
+	switch (id_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		return "l2_ctxt_remap";
+	case TF_IDENT_TYPE_PROF_FUNC:
+		return "prof_func";
+	case TF_IDENT_TYPE_WC_PROF:
+		return "wc_prof";
+	case TF_IDENT_TYPE_EM_PROF:
+		return "em_prof";
+	case TF_IDENT_TYPE_L2_FUNC:
+		return "l2_func";
+	default:
+		break;
+	}
+	return "Invalid identifier";
+}
+
+const char
+*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type)
+{
+	switch (sram_type) {
+	case TF_RESC_TYPE_SRAM_FULL_ACTION:
+		return "Full action";
+	case TF_RESC_TYPE_SRAM_MCG:
+		return "MCG";
+	case TF_RESC_TYPE_SRAM_ENCAP_8B:
+		return "Encap 8B";
+	case TF_RESC_TYPE_SRAM_ENCAP_16B:
+		return "Encap 16B";
+	case TF_RESC_TYPE_SRAM_ENCAP_64B:
+		return "Encap 64B";
+	case TF_RESC_TYPE_SRAM_SP_SMAC:
+		return "Source properties SMAC";
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
+		return "Source properties SMAC IPv4";
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
+		return "Source properties SMAC IPv6";
+	case TF_RESC_TYPE_SRAM_COUNTER_64B:
+		return "Counter 64B";
+	case TF_RESC_TYPE_SRAM_NAT_SPORT:
+		return "NAT source port";
+	case TF_RESC_TYPE_SRAM_NAT_DPORT:
+		return "NAT destination port";
+	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
+		return "NAT source IPv4";
+	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
+		return "NAT destination IPv4";
+	default:
+		return "Invalid identifier";
+	}
+}
+
+/**
+ * Helper function to perform a SRAM HCAPI resource type lookup
+ * against the reserved value of the same static type.
+ *
+ * Returns:
+ *   -EOPNOTSUPP - Reserved resource type not supported
+ *   Value       - Integer value of the reserved value for the requested type
+ */
+static int
+tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
+{
+	uint32_t value = -EOPNOTSUPP;
+
+	switch (index) {
+	case TF_RESC_TYPE_SRAM_FULL_ACTION:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_FULL_ACTION, value);
+		break;
+	case TF_RESC_TYPE_SRAM_MCG:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_MCG, value);
+		break;
+	case TF_RESC_TYPE_SRAM_ENCAP_8B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_8B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_ENCAP_16B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_16B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_ENCAP_64B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_64B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_SP_SMAC:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC, value);
+		break;
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV4, value);
+		break;
+	case TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV6, value);
+		break;
+	case TF_RESC_TYPE_SRAM_COUNTER_64B:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_COUNTER_64B, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_SPORT:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_SPORT, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_DPORT:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_DPORT, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_S_IPV4:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_S_IPV4, value);
+		break;
+	case TF_RESC_TYPE_SRAM_NAT_D_IPV4:
+		TF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_D_IPV4, value);
+		break;
+	default:
+		break;
+	}
+
+	return value;
+}
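+
+/*
+ * Illustration (comment only): tf_rm_rsvd_sram_value(TF_DIR_TX,
+ * TF_RESC_TYPE_SRAM_ENCAP_64B) returns TF_RSVD_SRAM_ENCAP_64B_TX,
+ * i.e. 1007 with the current reservations.
+ */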
+
+/**
+ * Helper function to print all the SRAM resource qcaps errors
+ * reported in the error_flag.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] sram_query
+ *   Pointer to the SRAM query result the error flags refer to
+ *
+ * [in] error_flag
+ *   Pointer to the sram error flags created at time of the query check
+ */
+static void
+tf_rm_print_sram_qcaps_error(enum tf_dir dir,
+			     struct tf_rm_sram_query *sram_query,
+			     uint32_t *error_flag)
+{
+	int i;
+
+	PMD_DRV_LOG(ERR, "QCAPS errors SRAM\n");
+	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	PMD_DRV_LOG(ERR, "  Elements:\n");
+
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
+		if (*error_flag & 1 << i)
+			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+				    tf_hcapi_sram_2_str(i),
+				    sram_query->sram_query[i].max,
+				    tf_rm_rsvd_sram_value(dir, i));
+	}
+}
+
+/**
+ * Performs a HW resource check between what firmware capability
+ * reports and what the core expects is available.
+ *
+ * Firmware performs the resource carving at AFM init time and the
+ * resource capability is reported in the TruFlow qcaps msg.
+ *
+ * [in] query
+ *   Pointer to HW Query data structure. Query holds what the firmware
+ *   offers of the HW resources.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in/out] error_flag
+ *   Pointer to a bit array indicating the error of a single HCAPI
+ *   resource type. When a bit is set to 1, the HCAPI resource type
+ *   failed static allocation.
+ *
+ * Returns:
+ *  0       - Success
+ *  -ENOMEM - Failure on one of the allocated resources. Check the
+ *            error_flag for what types are flagged errored.
+ */
+static int
+tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
+			    enum tf_dir dir,
+			    uint32_t *error_flag)
+{
+	*error_flag = 0;
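+
+	/* Only the RANGE_ENTRY type is checked against its static
+	 * reservation here; the remaining HW types are accepted as
+	 * reported by firmware.
+	 */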
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_RANGE_ENTRY,
+			     TF_RSVD_RANGE_ENTRY,
+			     error_flag);
+
+	if (*error_flag != 0)
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * Performs a SRAM resource check between what firmware capability
+ * reports and what the core expects is available.
+ *
+ * Firmware performs the resource carving at AFM init time and the
+ * resource capability is reported in the TruFlow qcaps msg.
+ *
+ * [in] query
+ *   Pointer to SRAM Query data structure. Query holds what the
+ *   firmware offers of the SRAM resources.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in/out] error_flag
+ *   Pointer to a bit array indicating the error of a single HCAPI
+ *   resource type. When a bit is set to 1, the HCAPI resource type
+ *   failed static allocation.
+ *
+ * Returns:
+ *  0       - Success
+ *  -ENOMEM - Failure on one of the allocated resources. Check the
+ *            error_flag for what types are flagged errored.
+ */
+static int
+tf_rm_check_sram_qcaps_static(struct tf_rm_sram_query *query,
+			      enum tf_dir dir,
+			      uint32_t *error_flag)
+{
+	*error_flag = 0;
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_FULL_ACTION,
+			       TF_RSVD_SRAM_FULL_ACTION,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_MCG,
+			       TF_RSVD_SRAM_MCG,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_ENCAP_8B,
+			       TF_RSVD_SRAM_ENCAP_8B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_ENCAP_16B,
+			       TF_RSVD_SRAM_ENCAP_16B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_ENCAP_64B,
+			       TF_RSVD_SRAM_ENCAP_64B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_SP_SMAC,
+			       TF_RSVD_SRAM_SP_SMAC,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+			       TF_RSVD_SRAM_SP_SMAC_IPV4,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+			       TF_RSVD_SRAM_SP_SMAC_IPV6,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_COUNTER_64B,
+			       TF_RSVD_SRAM_COUNTER_64B,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_SPORT,
+			       TF_RSVD_SRAM_NAT_SPORT,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_DPORT,
+			       TF_RSVD_SRAM_NAT_DPORT,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_S_IPV4,
+			       TF_RSVD_SRAM_NAT_S_IPV4,
+			       error_flag);
+
+	TF_RM_CHECK_SRAM_ALLOC(query,
+			       dir,
+			       TF_RESC_TYPE_SRAM_NAT_D_IPV4,
+			       TF_RSVD_SRAM_NAT_D_IPV4,
+			       error_flag);
+
+	if (*error_flag != 0)
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * Internal function to mark pool entries used.
+ */
+static void
+tf_rm_reserve_range(uint32_t count,
+		    uint32_t rsv_begin,
+		    uint32_t rsv_end,
+		    uint32_t max,
+		    struct bitalloc *pool)
+{
+	uint32_t i;
+
+	/* If no resources have been requested we mark everything
+	 * 'used'
+	 */
+	if (count == 0)	{
+		for (i = 0; i < max; i++)
+			ba_alloc_index(pool, i);
+	} else {
+		/* Support 2 main modes
+		 * Reserved range starts from bottom up (with
+		 * pre-reserved value or not)
+		 * - begin = 0 to end xx
+		 * - begin = 1 to end xx
+		 *
+		 * Reserved range starts from top down
+		 * - begin = yy to end max
+		 */
+
+		/* Bottom up check, start from 0 */
+		if (rsv_begin == 0) {
+			for (i = rsv_end + 1; i < max; i++)
+				ba_alloc_index(pool, i);
+		}
+
+		/* Bottom up check, start from 1 or higher OR
+		 * Top Down
+		 */
+		if (rsv_begin >= 1) {
+			/* Allocate from 0 until start */
+			for (i = 0; i < rsv_begin; i++)
+				ba_alloc_index(pool, i);
+
+			/* Skip and then do the remaining */
+			if (rsv_end < max - 1) {
+				for (i = rsv_end + 1; i < max; i++)
+					ba_alloc_index(pool, i);
+			}
+		}
+	}
+}
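+
+/*
+ * Illustration (comment only): with count = 4, rsv_begin = 2,
+ * rsv_end = 5 and max = 8, indices 0-1 and 6-7 are marked used
+ * while the reserved range 2-5 stays available for allocation.
+ */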
+
+/**
+ * Internal function to mark as allocated all the l2 ctxt entries
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
+	uint32_t end = 0;
+
+	/* l2 ctxt rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
+
+	/* l2 ctxt tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the l2 func resources
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_l2_func(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
+	uint32_t end = 0;
+
+	/* l2 func rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_RX);
+
+	/* l2 func tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the full action
+ * resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_full_action(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_FULL_ACTION;
+	uint16_t end = 0;
+
+	/* full action rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_FULL_ACTION_RX,
+			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX);
+
+	/* full action tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_FULL_ACTION_TX,
+			    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the multicast group
+ * resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_mcg(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_MCG;
+	uint16_t end = 0;
+
+	/* multicast group rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_MCG_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_MCG_RX,
+			    tfs->TF_SRAM_MCG_POOL_NAME_RX);
+
+	/* Multicast Group on TX is not supported */
+}
+
+/**
+ * Internal function to mark as allocated all the encap resources
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_encap(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_ENCAP_8B;
+	uint16_t end = 0;
+
+	/* encap 8b rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_8B_RX,
+			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX);
+
+	/* encap 8b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_8B_TX,
+			    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_ENCAP_16B;
+
+	/* encap 16b rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_16B_RX,
+			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX);
+
+	/* encap 16b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_16B_TX,
+			    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_ENCAP_64B;
+
+	/* Encap 64B not supported on RX */
+
+	/* Encap 64b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_ENCAP_64B_TX,
+			    tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the sp resources that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_sp(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_SP_SMAC;
+	uint16_t end = 0;
+
+	/* sp smac rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_RX,
+			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX);
+
+	/* sp smac tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_TX,
+			    tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
+
+	/* SP SMAC IPv4 not supported on RX */
+
+	/* sp smac ipv4 tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_IPV4_TX,
+			    tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
+
+	/* SP SMAC IPv6 not supported on RX */
+
+	/* sp smac ipv6 tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_SP_SMAC_IPV6_TX,
+			    tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the stat resources
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_stats(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_COUNTER_64B;
+	uint16_t end = 0;
+
+	/* counter 64b rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_COUNTER_64B_RX,
+			    tfs->TF_SRAM_STATS_64B_POOL_NAME_RX);
+
+	/* counter 64b tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_COUNTER_64B_TX,
+			    tfs->TF_SRAM_STATS_64B_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark as allocated all the nat resources
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sram_nat(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_SRAM_NAT_SPORT;
+	uint16_t end = 0;
+
+	/* nat source port rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_SPORT_RX,
+			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX);
+
+	/* nat source port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_SPORT_TX,
+			    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_NAT_DPORT;
+
+	/* nat destination port rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_DPORT_RX,
+			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX);
+
+	/* nat destination port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_DPORT_TX,
+			    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
+
+	/* nat source port ipv4 rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_S_IPV4_RX,
+			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX);
+
+	/* nat source ipv4 port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_S_IPV4_TX,
+			    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
+
+	/* nat destination port ipv4 rx direction */
+	if (tfs->resc.rx.sram_entry[index].stride > 0)
+		end = tfs->resc.rx.sram_entry[index].start +
+			tfs->resc.rx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX,
+			    end,
+			    TF_RSVD_SRAM_NAT_D_IPV4_RX,
+			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX);
+
+	/* nat destination ipv4 port tx direction */
+	if (tfs->resc.tx.sram_entry[index].stride > 0)
+		end = tfs->resc.tx.sram_entry[index].start +
+			tfs->resc.tx.sram_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,
+			    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX,
+			    end,
+			    TF_RSVD_SRAM_NAT_D_IPV4_TX,
+			    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX);
+}
+
+/**
+ * Internal function used to validate the HW allocated resources
+ * against the requested values.
+ */
+static int
+tf_rm_hw_alloc_validate(enum tf_dir dir,
+			struct tf_rm_hw_alloc *hw_alloc,
+			struct tf_rm_entry *hw_entry)
+{
+	int error = 0;
+	int i;
+
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
+		if (hw_entry[i].stride != hw_alloc->hw_num[i]) {
+			PMD_DRV_LOG(ERR,
+				"%s, Alloc failed id:%d expect:%d got:%d\n",
+				tf_dir_2_str(dir),
+				i,
+				hw_alloc->hw_num[i],
+				hw_entry[i].stride);
+			error = -1;
+		}
+	}
+
+	return error;
+}
+
+/**
+ * Internal function used to validate the SRAM allocated resources
+ * against the requested values.
+ */
+static int
+tf_rm_sram_alloc_validate(enum tf_dir dir,
+			  struct tf_rm_sram_alloc *sram_alloc,
+			  struct tf_rm_entry *sram_entry)
+{
+	int error = 0;
+	int i;
+
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
+		if (sram_entry[i].stride != sram_alloc->sram_num[i]) {
+			PMD_DRV_LOG(ERR,
+				"%s, Alloc failed idx:%d expect:%d got:%d\n",
+				tf_dir_2_str(dir),
+				i,
+				sram_alloc->sram_num[i],
+				sram_entry[i].stride);
+			error = -1;
+		}
+	}
+
+	return error;
+}
+
+/**
+ * Internal function used to mark as allocated all the HW resources
+ * that Truflow does not own.
+ */
+static void
+tf_rm_reserve_hw(struct tf *tfp)
+{
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* TBD
+	 * There is no direct AFM resource allocation as it is carved
+	 * statically at AFM boot time. Thus the bit allocators work
+	 * on the full HW resource amount and we just mark everything
+	 * used except the resources that Truflow took ownership of.
+	 */
+	tf_rm_rsvd_l2_ctxt(tfs);
+	tf_rm_rsvd_l2_func(tfs);
+}
+
+/**
+ * Internal function used to mark as allocated all the SRAM
+ * resources that Truflow does not own.
+ */
+static void
+tf_rm_reserve_sram(struct tf *tfp)
+{
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* TBD
+	 * There is no direct AFM resource allocation as it is carved
+	 * statically at AFM boot time. Thus the bit allocators work
+	 * on the full HW resource amount and we just mark everything
+	 * on the full SRAM resource amount and we just mark everything
+	 * used except the resources that Truflow took ownership of.
+	tf_rm_rsvd_sram_full_action(tfs);
+	tf_rm_rsvd_sram_mcg(tfs);
+	tf_rm_rsvd_sram_encap(tfs);
+	tf_rm_rsvd_sram_sp(tfs);
+	tf_rm_rsvd_sram_stats(tfs);
+	tf_rm_rsvd_sram_nat(tfs);
+}
+
+/**
+ * Internal function used to allocate and validate all HW resources.
+ */
+static int
+tf_rm_allocate_validate_hw(struct tf *tfp,
+			   enum tf_dir dir)
+{
+	int rc;
+	int i;
+	struct tf_rm_hw_query hw_query;
+	struct tf_rm_hw_alloc hw_alloc;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_rm_entry *hw_entries;
+	uint32_t error_flag;
+
+	if (dir == TF_DIR_RX)
+		hw_entries = tfs->resc.rx.hw_entry;
+	else
+		hw_entries = tfs->resc.tx.hw_entry;
+
+	/* Query for Session HW Resources */
+	rc = tf_msg_session_hw_resc_qcaps(tfp, dir, &hw_query);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, HW qcaps message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	rc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
+			tf_dir_2_str(dir),
+			error_flag);
+		goto cleanup;
+	}
+
+	/* Post process HW capability */
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++)
+		hw_alloc.hw_num[i] = hw_query.hw_query[i].max;
+
+	/* Allocate Session HW Resources */
+	rc = tf_msg_session_hw_resc_alloc(tfp, dir, &hw_alloc, hw_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, HW alloc message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	/* Perform HW allocation validation as it's possible the
+	 * resource availability changed between qcaps and alloc
+	 */
+	rc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, HW Resource validation failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	return 0;
+
+ cleanup:
+	return -1;
+}
+
+/**
+ * Internal function used to allocate and validate all SRAM resources.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0  - Success
+ *   -1 - Internal error
+ */
+static int
+tf_rm_allocate_validate_sram(struct tf *tfp,
+			     enum tf_dir dir)
+{
+	int rc;
+	int i;
+	struct tf_rm_sram_query sram_query;
+	struct tf_rm_sram_alloc sram_alloc;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+	struct tf_rm_entry *sram_entries;
+	uint32_t error_flag;
+
+	if (dir == TF_DIR_RX)
+		sram_entries = tfs->resc.rx.sram_entry;
+	else
+		sram_entries = tfs->resc.tx.sram_entry;
+
+	/* Query for Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+			tf_dir_2_str(dir),
+			error_flag);
+		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
+		goto cleanup;
+	}
+
+	/* Post process SRAM capability */
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
+		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+
+	/* Allocate Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_alloc(tfp,
+					    dir,
+					    &sram_alloc,
+					    sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	/* Perform SRAM allocation validation as it's possible the
+	 * resource availability changed between qcaps and alloc
+	 */
+	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	return 0;
+
+ cleanup:
+	return -1;
+}
+
+/**
+ * Helper function used to prune a SRAM resource array to only hold
+ * elements that need to be flushed.
+ *
+ * [in] tfs
+ *   Session handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] sram_entries
+ *   Master SRAM Resource database
+ *
+ * [in/out] flush_entries
+ *   Pruned SRAM Resource database of entries to be flushed. This
+ *   array should be passed in as a complete copy of the master SRAM
+ *   Resource database. The outgoing result will be a pruned version
+ *   based on the result of the requested checking.
+ *
+ * Returns:
+ *    0 - Success, no flush required
+ *    1 - Success, flush required
+ *   -1 - Internal error
+ */
+static int
+tf_rm_sram_to_flush(struct tf_session *tfs,
+		    enum tf_dir dir,
+		    struct tf_rm_entry *sram_entries,
+		    struct tf_rm_entry *flush_entries)
+{
+	int rc;
+	int flush_rc = 0;
+	int free_cnt;
+	struct bitalloc *pool;
+
+	/* Check all the sram resource pools and check for left over
+	 * elements. Any found will result in the complete pool of a
+	 * type to get invalidated.
+	 */
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_FULL_ACTION_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	/* Only pools for RX direction */
+	if (dir == TF_DIR_RX) {
+		TF_RM_GET_POOLS_RX(tfs, &pool,
+				   TF_SRAM_MCG_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_MCG].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune TX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_ENCAP_8B_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_ENCAP_16B_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	/* Only pools for TX direction */
+	if (dir == TF_DIR_TX) {
+		TF_RM_GET_POOLS_TX(tfs, &pool,
+				   TF_SRAM_ENCAP_64B_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt ==
+		    sram_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune RX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_SP_SMAC_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	/* Only pools for TX direction */
+	if (dir == TF_DIR_TX) {
+		TF_RM_GET_POOLS_TX(tfs, &pool,
+				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt ==
+		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride =
+				0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune RX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride = 0;
+	}
+
+	/* Only pools for TX direction */
+	if (dir == TF_DIR_TX) {
+		TF_RM_GET_POOLS_TX(tfs, &pool,
+				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
+		if (rc)
+			return rc;
+		free_cnt = ba_free_count(pool);
+		if (free_cnt ==
+		    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride) {
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
+			flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride =
+				0;
+		} else {
+			flush_rc = 1;
+		}
+	} else {
+		/* Always prune RX direction */
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride = 0;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_STATS_64B_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_SPORT_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_DPORT_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_S_IPV4_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SRAM_NAT_D_IPV4_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride) {
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].start = 0;
+		flush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	return flush_rc;
+}
+
+/**
+ * Helper function used to generate an error log for the SRAM types
+ * that need to be flushed. The types should have been cleaned up
+ * ahead of invoking tf_close_session.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] sram_entries
+ *   SRAM Resource database holding elements to be flushed
+ */
+static void
+tf_rm_log_sram_flush(enum tf_dir dir,
+		     struct tf_rm_entry *sram_entries)
+{
+	int i;
+
+	/* Walk the sram flush array and log the types that weren't
+	 * cleaned up.
+	 */
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {
+		if (sram_entries[i].stride != 0)
+			PMD_DRV_LOG(ERR,
+				    "%s: %s was not cleaned up\n",
+				    tf_dir_2_str(dir),
+				    tf_hcapi_sram_2_str(i));
+	}
+}
+
+void
+tf_rm_init(struct tf *tfp)
+{
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	/* This version is host specific and should be checked against
+	 * when attaching as there is no guarantee that a secondary
+	 * would run from the same image version.
+	 */
+	tfs->ver.major = TF_SESSION_VER_MAJOR;
+	tfs->ver.minor = TF_SESSION_VER_MINOR;
+	tfs->ver.update = TF_SESSION_VER_UPDATE;
+
+	tfs->session_id.id = 0;
+	tfs->ref_count = 0;
+
+	/* Initialization of Table Scopes */
+	/* ll_init(&tfs->tbl_scope_ll); */
+
+	/* Initialization of HW and SRAM resource DB */
+	memset(&tfs->resc, 0, sizeof(struct tf_rm_db));
+
+	/* Initialization of HW Resource Pools */
+	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+
+	/* Initialization of SRAM Resource Pools
+	 * These pools are set to the TFLIB defined MAX sizes, not
+	 * AFM's HW max, so as to limit the memory consumption
+	 */
+	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX,
+		TF_RSVD_SRAM_FULL_ACTION_RX);
+	ba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX,
+		TF_RSVD_SRAM_FULL_ACTION_TX);
+	/* Only Multicast Group on RX is supported */
+	ba_init(tfs->TF_SRAM_MCG_POOL_NAME_RX,
+		TF_RSVD_SRAM_MCG_RX);
+	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX,
+		TF_RSVD_SRAM_ENCAP_8B_RX);
+	ba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX,
+		TF_RSVD_SRAM_ENCAP_8B_TX);
+	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX,
+		TF_RSVD_SRAM_ENCAP_16B_RX);
+	ba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX,
+		TF_RSVD_SRAM_ENCAP_16B_TX);
+	/* Only Encap 64B on TX is supported */
+	ba_init(tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX,
+		TF_RSVD_SRAM_ENCAP_64B_TX);
+	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX,
+		TF_RSVD_SRAM_SP_SMAC_RX);
+	ba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX,
+		TF_RSVD_SRAM_SP_SMAC_TX);
+	/* Only SP SMAC IPv4 on TX is supported */
+	ba_init(tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
+		TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
+	/* Only SP SMAC IPv6 on TX is supported */
+	ba_init(tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
+		TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
+	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_RX,
+		TF_RSVD_SRAM_COUNTER_64B_RX);
+	ba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_TX,
+		TF_RSVD_SRAM_COUNTER_64B_TX);
+	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_SPORT_RX);
+	ba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_SPORT_TX);
+	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_DPORT_RX);
+	ba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_DPORT_TX);
+	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_S_IPV4_RX);
+	ba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_S_IPV4_TX);
+	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
+		TF_RSVD_SRAM_NAT_D_IPV4_RX);
+	ba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
+		TF_RSVD_SRAM_NAT_D_IPV4_TX);
+
+	/* Initialization of pools local to TF Core */
+	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	ba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+}
+
+int
+tf_rm_allocate_validate(struct tf *tfp)
+{
+	int rc;
+	int i;
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		rc = tf_rm_allocate_validate_hw(tfp, i);
+		if (rc)
+			return rc;
+		rc = tf_rm_allocate_validate_sram(tfp, i);
+		if (rc)
+			return rc;
+	}
+
+	/* With both HW and SRAM allocated and validated we can
+	 * 'scrub' the reservation on the pools.
+	 */
+	tf_rm_reserve_hw(tfp);
+	tf_rm_reserve_sram(tfp);
+
+	return rc;
+}
+
+int
+tf_rm_close(struct tf *tfp)
+{
+	int rc;
+	int rc_close = 0;
+	int i;
+	struct tf_rm_entry *hw_entries;
+	struct tf_rm_entry *sram_entries;
+	struct tf_rm_entry *sram_flush_entries;
+	struct tf_session *tfs =
+		(struct tf_session *)(tfp->session->core_data);
+
+	struct tf_rm_db flush_resc = tfs->resc;
+
+	/* On close it is assumed that the session has already cleaned
+	 * up all its resources, individually, while destroying its
+	 * flows. No checking is performed; thus the behavior is as
+	 * follows.
+	 *
+	 * Session RM will signal FW to release session resources. FW
+	 * will perform invalidation of all the allocated entries
+	 * (this assures any outstanding resources have been cleared),
+	 * then free the FW RM instance.
+	 *
+	 * Session will then be freed by tf_close_session() thus there
+	 * is no need to clean each resource pool as the whole session
+	 * is going away.
+	 */
+
+	for (i = 0; i < TF_DIR_MAX; i++) {
+		if (i == TF_DIR_RX) {
+			hw_entries = tfs->resc.rx.hw_entry;
+			sram_entries = tfs->resc.rx.sram_entry;
+			sram_flush_entries = flush_resc.rx.sram_entry;
+		} else {
+			hw_entries = tfs->resc.tx.hw_entry;
+			sram_entries = tfs->resc.tx.sram_entry;
+			sram_flush_entries = flush_resc.tx.sram_entry;
+		}
+
+		/* Check for any not previously freed SRAM resources
+		 * and flush if required.
+		 */
+		rc = tf_rm_sram_to_flush(tfs,
+					 i,
+					 sram_entries,
+					 sram_flush_entries);
+		if (rc) {
+			rc_close = -ENOTEMPTY;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, lingering SRAM resources\n",
+				    tf_dir_2_str(i));
+
+			/* Log the entries to be flushed */
+			tf_rm_log_sram_flush(i, sram_flush_entries);
+
+			rc = tf_msg_session_sram_resc_flush(tfp,
+							    i,
+							    sram_flush_entries);
+			if (rc) {
+				rc_close = rc;
+				/* Log error */
+				PMD_DRV_LOG(ERR,
+					    "%s, SRAM flush failed\n",
+					    tf_dir_2_str(i));
+			}
+		}
+
+		rc = tf_msg_session_hw_resc_free(tfp, i, hw_entries);
+		if (rc) {
+			rc_close = rc;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, HW free failed\n",
+				    tf_dir_2_str(i));
+		}
+
+		rc = tf_msg_session_sram_resc_free(tfp, i, sram_entries);
+		if (rc) {
+			rc_close = rc;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, SRAM free failed\n",
+				    tf_dir_2_str(i));
+		}
+	}
+
+	return rc_close;
+}
+
+int
+tf_rm_convert_tbl_type(enum tf_tbl_type type,
+		       uint32_t *hcapi_type)
+{
+	int rc = 0;
+
+	switch (type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		*hcapi_type = TF_RESC_TYPE_SRAM_FULL_ACTION;
+		break;
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		*hcapi_type = TF_RESC_TYPE_SRAM_MCG;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_8B;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_16B;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_64B;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;
+		break;
+	case TF_TBL_TYPE_ACT_STATS_64:
+		*hcapi_type = TF_RESC_TYPE_SRAM_COUNTER_64B;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_SPORT;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_DPORT;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_S_IPV4;
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		*hcapi_type = TF_RESC_TYPE_SRAM_NAT_D_IPV4;
+		break;
+	case TF_TBL_TYPE_METER_PROF:
+		*hcapi_type = TF_RESC_TYPE_HW_METER_PROF;
+		break;
+	case TF_TBL_TYPE_METER_INST:
+		*hcapi_type = TF_RESC_TYPE_HW_METER_INST;
+		break;
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		*hcapi_type = TF_RESC_TYPE_HW_MIRROR;
+		break;
+	case TF_TBL_TYPE_UPAR:
+		*hcapi_type = TF_RESC_TYPE_HW_UPAR;
+		break;
+	case TF_TBL_TYPE_EPOCH0:
+		*hcapi_type = TF_RESC_TYPE_HW_EPOCH0;
+		break;
+	case TF_TBL_TYPE_EPOCH1:
+		*hcapi_type = TF_RESC_TYPE_HW_EPOCH1;
+		break;
+	case TF_TBL_TYPE_METADATA:
+		*hcapi_type = TF_RESC_TYPE_HW_METADATA;
+		break;
+	case TF_TBL_TYPE_CT_STATE:
+		*hcapi_type = TF_RESC_TYPE_HW_CT_STATE;
+		break;
+	case TF_TBL_TYPE_RANGE_PROF:
+		*hcapi_type = TF_RESC_TYPE_HW_RANGE_PROF;
+		break;
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		*hcapi_type = TF_RESC_TYPE_HW_RANGE_ENTRY;
+		break;
+	case TF_TBL_TYPE_LAG:
+		*hcapi_type = TF_RESC_TYPE_HW_LAG_ENTRY;
+		break;
+	/* Not yet supported */
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+	case TF_TBL_TYPE_VNIC_SVIF:
+	case TF_TBL_TYPE_EXT:   /* No pools for this type */
+	case TF_TBL_TYPE_EXT_0: /* No pools for this type */
+	default:
+		*hcapi_type = -1;
+		rc = -EOPNOTSUPP;
+	}
+
+	return rc;
+}
+
+int
+tf_rm_convert_index(struct tf_session *tfs,
+		    enum tf_dir dir,
+		    enum tf_tbl_type type,
+		    enum tf_rm_convert_type c_type,
+		    uint32_t index,
+		    uint32_t *convert_index)
+{
+	int rc;
+	struct tf_rm_resc *resc;
+	uint32_t hcapi_type;
+	uint32_t base_index;
+
+	if (dir == TF_DIR_RX)
+		resc = &tfs->resc.rx;
+	else if (dir == TF_DIR_TX)
+		resc = &tfs->resc.tx;
+	else
+		return -EOPNOTSUPP;
+
+	rc = tf_rm_convert_tbl_type(type, &hcapi_type);
+	if (rc)
+		return -1;
+
+	switch (type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+	case TF_TBL_TYPE_MCAST_GROUPS:
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+	case TF_TBL_TYPE_ACT_STATS_64:
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+	case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		base_index = resc->sram_entry[hcapi_type].start;
+		break;
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+	case TF_TBL_TYPE_METER_PROF:
+	case TF_TBL_TYPE_METER_INST:
+	case TF_TBL_TYPE_UPAR:
+	case TF_TBL_TYPE_EPOCH0:
+	case TF_TBL_TYPE_EPOCH1:
+	case TF_TBL_TYPE_METADATA:
+	case TF_TBL_TYPE_CT_STATE:
+	case TF_TBL_TYPE_RANGE_PROF:
+	case TF_TBL_TYPE_RANGE_ENTRY:
+	case TF_TBL_TYPE_LAG:
+		base_index = resc->hw_entry[hcapi_type].start;
+		break;
+	/* Not yet supported */
+	case TF_TBL_TYPE_VNIC_SVIF:
+	case TF_TBL_TYPE_EXT:   /* No pools for this type */
+	case TF_TBL_TYPE_EXT_0: /* No pools for this type */
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	switch (c_type) {
+	case TF_RM_CONVERT_RM_BASE:
+		*convert_index = index - base_index;
+		break;
+	case TF_RM_CONVERT_ADD_BASE:
+		*convert_index = index + base_index;
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
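
As a usage illustration of the conversion API just defined, a hedged sketch (not part of the patch) that round-trips an index through both conversion types for an RX full action record:

static int
sketch_convert_round_trip(struct tf_session *tfs)
{
	uint32_t hw_idx = 0;
	uint32_t rm_idx = 0;
	int rc;

	/* 0-based session index -> absolute HW index */
	rc = tf_rm_convert_index(tfs, TF_DIR_RX,
				 TF_TBL_TYPE_FULL_ACT_RECORD,
				 TF_RM_CONVERT_ADD_BASE,
				 5, &hw_idx);
	if (rc)
		return rc;

	/* hw_idx is now 5 plus the session's full action SRAM base;
	 * converting back yields rm_idx == 5 again.
	 */
	return tf_rm_convert_index(tfs, TF_DIR_RX,
				   TF_TBL_TYPE_FULL_ACT_RECORD,
				   TF_RM_CONVERT_RM_BASE,
				   hw_idx, &rm_idx);
}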
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 57ce19b..e69d443 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -107,6 +107,54 @@ struct tf_rm_sram_alloc {
 };
 
 /**
+ * Resource Manager arrays for a single direction
+ */
+struct tf_rm_resc {
+	/** array of HW resource entries */
+	struct tf_rm_entry hw_entry[TF_RESC_TYPE_HW_MAX];
+	/** array of SRAM resource entries */
+	struct tf_rm_entry sram_entry[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Resource Manager Database
+ */
+struct tf_rm_db {
+	struct tf_rm_resc rx;
+	struct tf_rm_resc tx;
+};
+
+/**
+ * Helper function converting direction to text string
+ */
+const char
+*tf_dir_2_str(enum tf_dir dir);
+
+/**
+ * Helper function converting identifier to text string
+ */
+const char
+*tf_ident_2_str(enum tf_identifier_type id_type);
+
+/**
+ * Helper function converting tcam type to text string
+ */
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type);
+
+/**
+ * Helper function used to convert HW HCAPI resource type to a string.
+ */
+const char
+*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type);
+
+/**
+ * Helper function used to convert SRAM HCAPI resource type to a string.
+ */
+const char
+*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type);
+
+/**
  * Initializes the Resource Manager and the associated database
  * entries for HW and SRAM resources. Must be called before any other
  * Resource Manager functions.
@@ -143,4 +191,131 @@ int tf_rm_allocate_validate(struct tf *tfp);
  *   - (-ENOTEMPTY) if resources are not cleaned up before close
  */
 int tf_rm_close(struct tf *tfp);
+
+#if (TF_SHADOW == 1)
+/**
+ * Initializes Shadow DB of configuration elements
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * Returns:
+ *  0  - Success
+ */
+int tf_rm_shadow_db_init(struct tf_session *tfs);
+#endif /* TF_SHADOW */
+
+/**
+ * Perform a Session Pool lookup using the TCAM table type.
+ *
+ * Function will print an error message if the TCAM type is
+ * unsupported or the lookup fails.
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * [in] type
+ *   Type of the object
+ *
+ * [in] dir
+ *    Receive or transmit direction
+ *
+ * [in/out] pool
+ *   Session pool
+ *
+ * Returns:
+ *  0           - Success, will set the **pool
+ *  -EOPNOTSUPP - Type is not supported
+ */
+int
+tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
+			    enum tf_dir dir,
+			    enum tf_tcam_tbl_type type,
+			    struct bitalloc **pool);
+
+/**
+ * Perform a Session Pool lookup using the Table type.
+ *
+ * Function will print an error message if the table type is
+ * unsupported or the lookup fails.
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * [in] type
+ *   Type of the object
+ *
+ * [in] dir
+ *    Receive or transmit direction
+ *
+ * [in/out] pool
+ *   Session pool
+ *
+ * Returns:
+ *  0           - Success, will set the **pool
+ *  -EOPNOTSUPP - Type is not supported
+ */
+int
+tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
+			   enum tf_dir dir,
+			   enum tf_tbl_type type,
+			   struct bitalloc **pool);
+
+/**
+ * Converts the TF Table Type to internal HCAPI_TYPE
+ *
+ * [in] type
+ *   Type to be converted
+ *
+ * [in/out] hcapi_type
+ *   Converted type
+ *
+ * Returns:
+ *  0           - Success will set the *hcapi_type
+ *  -EOPNOTSUPP - Type is not supported
+ */
+int
+tf_rm_convert_tbl_type(enum tf_tbl_type type,
+		       uint32_t *hcapi_type);
+
+/**
+ * TF RM Convert of index methods.
+ */
+enum tf_rm_convert_type {
+	/** Adds the base of the Session Pool to the index */
+	TF_RM_CONVERT_ADD_BASE,
+	/** Removes the Session Pool base from the index */
+	TF_RM_CONVERT_RM_BASE
+};
+
+/**
+ * Provides conversion of the Table Type index in relation to the
+ * Session Pool base.
+ *
+ * [in] tfs
+ *   Pointer to TF Session
+ *
+ * [in] dir
+ *    Receive or transmit direction
+ *
+ * [in] type
+ *   Type of the object
+ *
+ * [in] c_type
+ *   Type of conversion to perform
+ *
+ * [in] index
+ *   Index to be converted
+ *
+ * [in/out] convert_index
+ *   Pointer to the converted index
+ *
+ * Returns:
+ *  0 on success, negative on failure
+ */
+int
+tf_rm_convert_index(struct tf_session *tfs,
+		    enum tf_dir dir,
+		    enum tf_tbl_type type,
+		    enum tf_rm_convert_type c_type,
+		    uint32_t index,
+		    uint32_t *convert_index);
+
 #endif /* TF_RM_H_ */
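
Taken together, the lookup and conversion prototypes above suggest the following allocation path. This is a sketch only, not part of the patch; it assumes bitalloc exposes a ba_alloc() that returns the first free index or a negative value (only ba_init(), ba_alloc_index() and ba_free_count() appear in this series):

static int
sketch_alloc_encap64(struct tf_session *tfs, uint32_t *hw_index)
{
	struct bitalloc *pool;
	int idx;
	int rc;

	/* Find the session pool backing this table type */
	rc = tf_rm_lookup_tbl_type_pool(tfs, TF_DIR_TX,
					TF_TBL_TYPE_ACT_ENCAP_64B,
					&pool);
	if (rc)
		return rc;

	/* Take a free session-relative index (assumed allocator) */
	idx = ba_alloc(pool);
	if (idx < 0)
		return -ENOMEM;

	/* Convert to the absolute index the HW tables expect */
	return tf_rm_convert_index(tfs, TF_DIR_TX,
				   TF_TBL_TYPE_ACT_ENCAP_64B,
				   TF_RM_CONVERT_ADD_BASE,
				   idx, hw_index);
}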
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 651d3ee..34b6c41 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -76,11 +76,215 @@ struct tf_session {
 	 */
 	uint8_t ref_count;
 
+	/** Session HW and SRAM resources */
+	struct tf_rm_db resc;
+
+	/* Session HW resource pools */
+
+	/** RX L2 CTXT TCAM Pool */
+	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	/** TX L2 CTXT TCAM Pool */
+	BITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+
+	/** RX Profile Func Pool */
+	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
+	/** TX Profile Func Pool */
+	BITALLOC_INST(TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
+
+	/** RX Profile TCAM Pool */
+	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
+	/** TX Profile TCAM Pool */
+	BITALLOC_INST(TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
+
+	/** RX EM Profile ID Pool */
+	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
+	/** TX EM Profile ID Pool */
+	BITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
+
+	/** RX WC Profile Pool */
+	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
+	/** TX WC Profile Pool */
+	BITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
+
+	/* TBD, how do we want to handle EM records ?*/
+	/* EM Records are not controlled by way of a pool */
+
+	/** RX WC TCAM Pool */
+	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
+	/** TX WC TCAM Pool */
+	BITALLOC_INST(TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
+
+	/** RX Meter Profile Pool */
+	BITALLOC_INST(TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
+	/** TX Meter Profile Pool */
+	BITALLOC_INST(TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
+
+	/** RX Meter Instance Pool */
+	BITALLOC_INST(TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
+	/** TX Meter Pool */
+	BITALLOC_INST(TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
+
+	/** RX Mirror Configuration Pool */
+	BITALLOC_INST(TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
+	/** TX Mirror Configuration Pool */
+	BITALLOC_INST(TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
+
+	/** RX UPAR Pool */
+	BITALLOC_INST(TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
+	/** TX UPAR Pool */
+	BITALLOC_INST(TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
+
+	/** RX SP TCAM Pool */
+	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
+	/** TX SP TCAM Pool */
+	BITALLOC_INST(TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
+
+	/** RX FKB Pool */
+	BITALLOC_INST(TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
+	/** TX FKB Pool */
+	BITALLOC_INST(TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
+
+	/** RX Table Scope Pool */
+	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
+	/** TX Table Scope Pool */
+	BITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
+
+	/** RX L2 Func Pool */
+	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
+	/** TX L2 Func Pool */
+	BITALLOC_INST(TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
+
+	/** RX Epoch0 Pool */
+	BITALLOC_INST(TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
+	/** TX Epoch0 Pool */
+	BITALLOC_INST(TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
+
+	/** RX Epoch1 Pool */
+	BITALLOC_INST(TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
+	/** TX Epoch1 Pool */
+	BITALLOC_INST(TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
+
+	/** RX MetaData Profile Pool */
+	BITALLOC_INST(TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
+	/** TX MetaData Profile Pool */
+	BITALLOC_INST(TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
+
+	/** RX Connection Tracking State Pool */
+	BITALLOC_INST(TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
+	/** TX Connection Tracking State Pool */
+	BITALLOC_INST(TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
+
+	/** RX Range Profile Pool */
+	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
+	/** TX Range Profile Pool */
+	BITALLOC_INST(TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
+
+	/** RX Range Pool */
+	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
+	/** TX Range Pool */
+	BITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
+
+	/** RX LAG Pool */
+	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
+	/** TX LAG Pool */
+	BITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
+
+	/* Session SRAM pools */
+
+	/** RX Full Action Record Pool */
+	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_RX,
+		      TF_RSVD_SRAM_FULL_ACTION_RX);
+	/** TX Full Action Record Pool */
+	BITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_TX,
+		      TF_RSVD_SRAM_FULL_ACTION_TX);
+
+	/** RX Multicast Group Pool, only RX is supported */
+	BITALLOC_INST(TF_SRAM_MCG_POOL_NAME_RX,
+		      TF_RSVD_SRAM_MCG_RX);
+
+	/** RX Encap 8B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_RX,
+		      TF_RSVD_SRAM_ENCAP_8B_RX);
+	/** TX Encap 8B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_ENCAP_8B_TX);
+
+	/** RX Encap 16B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_RX,
+		      TF_RSVD_SRAM_ENCAP_16B_RX);
+	/** TX Encap 16B Pool */
+	BITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_ENCAP_16B_TX);
+
+	/** TX Encap 64B Pool, only TX is supported */
+	BITALLOC_INST(TF_SRAM_ENCAP_64B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_ENCAP_64B_TX);
+
+	/** RX Source Properties SMAC Pool */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_RX,
+		      TF_RSVD_SRAM_SP_SMAC_RX);
+	/** TX Source Properties SMAC Pool */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_TX,
+		      TF_RSVD_SRAM_SP_SMAC_TX);
+
+	/** TX Source Properties SMAC IPv4 Pool, only TX is supported */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,
+		      TF_RSVD_SRAM_SP_SMAC_IPV4_TX);
+
+	/** TX Source Properties SMAC IPv6 Pool, only TX is supported */
+	BITALLOC_INST(TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,
+		      TF_RSVD_SRAM_SP_SMAC_IPV6_TX);
+
+	/** RX Counter 64B Pool */
+	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_RX,
+		      TF_RSVD_SRAM_COUNTER_64B_RX);
+	/** TX Counter 64B Pool */
+	BITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_TX,
+		      TF_RSVD_SRAM_COUNTER_64B_TX);
+
+	/** RX NAT Source Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_SPORT_RX);
+	/** TX NAT Source Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_SPORT_TX);
+
+	/** RX NAT Destination Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_DPORT_RX);
+	/** TX NAT Destination Port Pool */
+	BITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_DPORT_TX);
+
+	/** RX NAT Source IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_S_IPV4_RX);
+	/** TX NAT Source IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_S_IPV4_TX);
+
+	/** RX NAT Destination IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,
+		      TF_RSVD_SRAM_NAT_D_IPV4_RX);
+	/** TX NAT Destination IPv4 Pool */
+	BITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,
+		      TF_RSVD_SRAM_NAT_D_IPV4_TX);
+
+	/**
+	 * Pools not allocated from HCAPI RM
+	 */
+
+	/** RX L2 Ctx Remap ID Pool */
+	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
+	/** TX L2 Ctx Remap ID Pool */
+	BITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+
 	/** CRC32 seed table */
 #define TF_LKUP_SEED_MEM_SIZE 512
 	uint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];
+
 	/** Lookup3 init values */
 	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
-
 };
+
 #endif /* _TF_SESSION_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 08/34] net/bnxt: add resource manager functionality
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (6 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 07/34] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 09/34] net/bnxt: add tf core identifier support Venkat Duvvuru
                         ` (28 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Michael Wildt

From: Michael Wildt <michael.wildt@broadcom.com>

- Add TruFlow RM functionality for resource handling
- Update the TruFlow Resource Manager (RM) with resource
  support functions for debugging as well as resource cleanup.
- Add support for internal and external pools.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c    |   14 +
 drivers/net/bnxt/tf_core/tf_core.h    |   26 +
 drivers/net/bnxt/tf_core/tf_rm.c      | 1718 +++++++++++++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_session.h |   10 +
 drivers/net/bnxt/tf_core/tf_tbl.h     |   43 +
 5 files changed, 1735 insertions(+), 76 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.h

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 7d76efa..bb6d38b 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -149,6 +149,20 @@ tf_open_session(struct tf                    *tfp,
 		goto cleanup_close;
 	}
 
+	/* Shadow DB configuration */
+	if (parms->shadow_copy) {
+		/* Ignore shadow_copy setting */
+		session->shadow_copy = 0;/* parms->shadow_copy; */
+#if (TF_SHADOW == 1)
+		rc = tf_rm_shadow_db_init(tfs);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "Shadow DB Initialization failed, rc:%d\n",
+				    rc);
+		/* Add additional processing */
+#endif /* TF_SHADOW */
+	}
+
 	/* Adjust the Session with what firmware allowed us to get */
 	rc = tf_rm_allocate_validate(tfp);
 	if (rc) {
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 3455d8f..16c8251 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -30,6 +30,32 @@ enum tf_dir {
 	TF_DIR_MAX
 };
 
+/**
+ * External pool size
+ *
+ * Defines a single pool of external action records of
+ * fixed size.  Currently, this is an index.
+ */
+#define TF_EXT_POOL_ENTRY_SZ_BYTES 1
+
+/**
+ *  External pool entry count
+ *
+ *  Defines the number of entries in the external action pool
+ */
+#define TF_EXT_POOL_ENTRY_CNT (1 * 1024)
+
+/**
+ * Number of external pools
+ */
+#define TF_EXT_POOL_CNT_MAX 1
+
+/**
+ * External pool Id
+ */
+#define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
+#define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
+
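+/*
+ * Worked arithmetic implied by the defines above (reference only):
+ * one external pool spans
+ *   TF_EXT_POOL_ENTRY_CNT * TF_EXT_POOL_ENTRY_SZ_BYTES
+ *   = 1024 * 1 = 1024 bytes of index space,
+ * and at most TF_EXT_POOL_CNT_MAX = 1 such pool is supported.
+ */
+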
 /********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
 
 /**
diff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c
index 56767e7..a5e96f29 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.c
+++ b/drivers/net/bnxt/tf_core/tf_rm.c
@@ -104,9 +104,82 @@ const char
 	case TF_IDENT_TYPE_L2_FUNC:
 		return "l2_func";
 	default:
-		break;
+		return "Invalid identifier";
+	}
+}
+
+const char
+*tf_tcam_tbl_2_str(enum tf_tcam_tbl_type tcam_type)
+{
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		return "l2_ctxt_tcam";
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		return "prof_tcam";
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		return "wc_tcam";
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+		return "veb_tcam";
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+		return "sp_tcam";
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+		return "ct_rule_tcam";
+	default:
+		return "Invalid tcam table type";
+	}
+}
+
+const char
+*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)
+{
+	switch (hw_type) {
+	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
+		return "L2 ctxt tcam";
+	case TF_RESC_TYPE_HW_PROF_FUNC:
+		return "Profile Func";
+	case TF_RESC_TYPE_HW_PROF_TCAM:
+		return "Profile tcam";
+	case TF_RESC_TYPE_HW_EM_PROF_ID:
+		return "EM profile id";
+	case TF_RESC_TYPE_HW_EM_REC:
+		return "EM record";
+	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
+		return "WC tcam profile id";
+	case TF_RESC_TYPE_HW_WC_TCAM:
+		return "WC tcam";
+	case TF_RESC_TYPE_HW_METER_PROF:
+		return "Meter profile";
+	case TF_RESC_TYPE_HW_METER_INST:
+		return "Meter instance";
+	case TF_RESC_TYPE_HW_MIRROR:
+		return "Mirror";
+	case TF_RESC_TYPE_HW_UPAR:
+		return "UPAR";
+	case TF_RESC_TYPE_HW_SP_TCAM:
+		return "Source properties tcam";
+	case TF_RESC_TYPE_HW_L2_FUNC:
+		return "L2 Function";
+	case TF_RESC_TYPE_HW_FKB:
+		return "FKB";
+	case TF_RESC_TYPE_HW_TBL_SCOPE:
+		return "Table scope";
+	case TF_RESC_TYPE_HW_EPOCH0:
+		return "EPOCH0";
+	case TF_RESC_TYPE_HW_EPOCH1:
+		return "EPOCH1";
+	case TF_RESC_TYPE_HW_METADATA:
+		return "Metadata";
+	case TF_RESC_TYPE_HW_CT_STATE:
+		return "Connection tracking state";
+	case TF_RESC_TYPE_HW_RANGE_PROF:
+		return "Range profile";
+	case TF_RESC_TYPE_HW_RANGE_ENTRY:
+		return "Range entry";
+	case TF_RESC_TYPE_HW_LAG_ENTRY:
+		return "LAG";
+	default:
+		return "Invalid identifier";
 	}
-	return "Invalid identifier";
 }
 
 const char
@@ -145,6 +218,93 @@ const char
 }
 
 /**
+ * Helper function to perform a HW HCAPI resource type lookup against
+ * the reserved value of the same static type.
+ *
+ * Returns:
+ *   -EOPNOTSUPP - Reserved resource type not supported
+ *   Value       - Integer value of the reserved value for the requested type
+ */
+static int
+tf_rm_rsvd_hw_value(enum tf_dir dir, enum tf_resource_type_hw index)
+{
+	uint32_t value = -EOPNOTSUPP;
+
+	switch (index) {
+	case TF_RESC_TYPE_HW_L2_CTXT_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_L2_CTXT_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_PROF_FUNC:
+		TF_RESC_RSVD(dir, TF_RSVD_PROF_FUNC, value);
+		break;
+	case TF_RESC_TYPE_HW_PROF_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_PROF_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_EM_PROF_ID:
+		TF_RESC_RSVD(dir, TF_RSVD_EM_PROF_ID, value);
+		break;
+	case TF_RESC_TYPE_HW_EM_REC:
+		TF_RESC_RSVD(dir, TF_RSVD_EM_REC, value);
+		break;
+	case TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:
+		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM_PROF_ID, value);
+		break;
+	case TF_RESC_TYPE_HW_WC_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_WC_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_METER_PROF:
+		TF_RESC_RSVD(dir, TF_RSVD_METER_PROF, value);
+		break;
+	case TF_RESC_TYPE_HW_METER_INST:
+		TF_RESC_RSVD(dir, TF_RSVD_METER_INST, value);
+		break;
+	case TF_RESC_TYPE_HW_MIRROR:
+		TF_RESC_RSVD(dir, TF_RSVD_MIRROR, value);
+		break;
+	case TF_RESC_TYPE_HW_UPAR:
+		TF_RESC_RSVD(dir, TF_RSVD_UPAR, value);
+		break;
+	case TF_RESC_TYPE_HW_SP_TCAM:
+		TF_RESC_RSVD(dir, TF_RSVD_SP_TCAM, value);
+		break;
+	case TF_RESC_TYPE_HW_L2_FUNC:
+		TF_RESC_RSVD(dir, TF_RSVD_L2_FUNC, value);
+		break;
+	case TF_RESC_TYPE_HW_FKB:
+		TF_RESC_RSVD(dir, TF_RSVD_FKB, value);
+		break;
+	case TF_RESC_TYPE_HW_TBL_SCOPE:
+		TF_RESC_RSVD(dir, TF_RSVD_TBL_SCOPE, value);
+		break;
+	case TF_RESC_TYPE_HW_EPOCH0:
+		TF_RESC_RSVD(dir, TF_RSVD_EPOCH0, value);
+		break;
+	case TF_RESC_TYPE_HW_EPOCH1:
+		TF_RESC_RSVD(dir, TF_RSVD_EPOCH1, value);
+		break;
+	case TF_RESC_TYPE_HW_METADATA:
+		TF_RESC_RSVD(dir, TF_RSVD_METADATA, value);
+		break;
+	case TF_RESC_TYPE_HW_CT_STATE:
+		TF_RESC_RSVD(dir, TF_RSVD_CT_STATE, value);
+		break;
+	case TF_RESC_TYPE_HW_RANGE_PROF:
+		TF_RESC_RSVD(dir, TF_RSVD_RANGE_PROF, value);
+		break;
+	case TF_RESC_TYPE_HW_RANGE_ENTRY:
+		TF_RESC_RSVD(dir, TF_RSVD_RANGE_ENTRY, value);
+		break;
+	case TF_RESC_TYPE_HW_LAG_ENTRY:
+		TF_RESC_RSVD(dir, TF_RSVD_LAG_ENTRY, value);
+		break;
+	default:
+		break;
+	}
+
+	return value;
+}
+
+/**
  * Helper function to perform a SRAM HCAPI resource type lookup
  * against the reserved value of the same static type.
  *
@@ -205,6 +365,36 @@ tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)
 }
 
 /**
+ * Helper function to print all the HW resource qcaps errors reported
+ * in the error_flag.
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] hw_query
+ *   Pointer to the HW query result the check was performed against
+ *
+ * [in] error_flag
+ *   Pointer to the hw error flags created at time of the query check
+ */
+static void
+tf_rm_print_hw_qcaps_error(enum tf_dir dir,
+			   struct tf_rm_hw_query *hw_query,
+			   uint32_t *error_flag)
+{
+	int i;
+
+	PMD_DRV_LOG(ERR, "QCAPS errors HW\n");
+	PMD_DRV_LOG(ERR, "  Direction: %s\n", tf_dir_2_str(dir));
+	PMD_DRV_LOG(ERR, "  Elements:\n");
+
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
+		if (*error_flag & 1 << i)
+			PMD_DRV_LOG(ERR, "    %s, %d elem available, req:%d\n",
+				    tf_hcapi_hw_2_str(i),
+				    hw_query->hw_query[i].max,
+				    tf_rm_rsvd_hw_value(dir, i));
+	}
+}
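
The per-type bit in error_flag drives the loop above. The checking macro's body is outside this hunk, so the following is only an assumed sketch of its effect: flag a HW type whose queried capacity falls below the static per-direction reservation.

static void
sketch_check_hw_alloc(struct tf_rm_hw_query *query,
		      enum tf_dir dir,
		      enum tf_resource_type_hw type,
		      uint32_t *error_flag)
{
	/* Assumed semantics of TF_RM_CHECK_HW_ALLOC (sketch only) */
	if (query->hw_query[type].max <
	    (uint32_t)tf_rm_rsvd_hw_value(dir, type))
		*error_flag |= 1 << type;
}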
+
+/**
  * Helper function to print all the SRAM resource qcaps errors
  * reported in the error_flag.
  *
@@ -264,12 +454,139 @@ tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,
 			    uint32_t *error_flag)
 {
 	*error_flag = 0;
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_L2_CTXT_TCAM,
+			     TF_RSVD_L2_CTXT_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_PROF_FUNC,
+			     TF_RSVD_PROF_FUNC,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_PROF_TCAM,
+			     TF_RSVD_PROF_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EM_PROF_ID,
+			     TF_RSVD_EM_PROF_ID,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EM_REC,
+			     TF_RSVD_EM_REC,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,
+			     TF_RSVD_WC_TCAM_PROF_ID,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_WC_TCAM,
+			     TF_RSVD_WC_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_METER_PROF,
+			     TF_RSVD_METER_PROF,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_METER_INST,
+			     TF_RSVD_METER_INST,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_MIRROR,
+			     TF_RSVD_MIRROR,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_UPAR,
+			     TF_RSVD_UPAR,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_SP_TCAM,
+			     TF_RSVD_SP_TCAM,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_L2_FUNC,
+			     TF_RSVD_L2_FUNC,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_FKB,
+			     TF_RSVD_FKB,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_TBL_SCOPE,
+			     TF_RSVD_TBL_SCOPE,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EPOCH0,
+			     TF_RSVD_EPOCH0,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_EPOCH1,
+			     TF_RSVD_EPOCH1,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_METADATA,
+			     TF_RSVD_METADATA,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_CT_STATE,
+			     TF_RSVD_CT_STATE,
+			     error_flag);
+
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_RANGE_PROF,
+			     TF_RSVD_RANGE_PROF,
+			     error_flag);
+
 	TF_RM_CHECK_HW_ALLOC(query,
 			     dir,
 			     TF_RESC_TYPE_HW_RANGE_ENTRY,
 			     TF_RSVD_RANGE_ENTRY,
 			     error_flag);
 
+	TF_RM_CHECK_HW_ALLOC(query,
+			     dir,
+			     TF_RESC_TYPE_HW_LAG_ENTRY,
+			     TF_RSVD_LAG_ENTRY,
+			     error_flag);
+
 	if (*error_flag != 0)
 		return -ENOMEM;
 
@@ -434,26 +751,584 @@ tf_rm_reserve_range(uint32_t count,
 			for (i = 0; i < rsv_begin; i++)
 				ba_alloc_index(pool, i);
 
-			/* Skip and then do the remaining */
-			if (rsv_end < max - 1) {
-				for (i = rsv_end; i < max; i++)
-					ba_alloc_index(pool, i);
-			}
-		}
-	}
+			/* Skip and then do the remaining */
+			if (rsv_end < max - 1) {
+				for (i = rsv_end; i < max; i++)
+					ba_alloc_index(pool, i);
+			}
+		}
+	}
+}
+
+/**
+ * Internal function to mark all the l2 ctxt allocated that Truflow
+ * does not own.
+ */
+static void
+tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
+	uint32_t end = 0;
+
+	/* l2 ctxt rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
+
+	/* l2 ctxt tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_CTXT_TCAM,
+			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the profile tcam and profile func
+ * resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_prof(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_PROF_FUNC;
+	uint32_t end = 0;
+
+	/* profile func rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_FUNC,
+			    tfs->TF_PROF_FUNC_POOL_NAME_RX);
+
+	/* profile func tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_FUNC,
+			    tfs->TF_PROF_FUNC_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_PROF_TCAM;
+
+	/* profile tcam rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_TCAM,
+			    tfs->TF_PROF_TCAM_POOL_NAME_RX);
+
+	/* profile tcam tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_PROF_TCAM,
+			    tfs->TF_PROF_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the em profile id allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_em_prof(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_EM_PROF_ID;
+	uint32_t end = 0;
+
+	/* em prof id rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EM_PROF_ID,
+			    tfs->TF_EM_PROF_ID_POOL_NAME_RX);
+
+	/* em prof id tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EM_PROF_ID,
+			    tfs->TF_EM_PROF_ID_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the wildcard tcam and profile id
+ * resources that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_wc(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_WC_TCAM_PROF_ID;
+	uint32_t end = 0;
+
+	/* wc profile id rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_PROF_ID,
+			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX);
+
+	/* wc profile id tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_PROF_ID,
+			    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_WC_TCAM;
+
+	/* wc tcam rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_TCAM_ROW,
+			    tfs->TF_WC_TCAM_POOL_NAME_RX);
+
+	/* wc tcam tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_WC_TCAM_ROW,
+			    tfs->TF_WC_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the meter resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_meter(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_METER_PROF;
+	uint32_t end = 0;
+
+	/* meter profiles rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER_PROF,
+			    tfs->TF_METER_PROF_POOL_NAME_RX);
+
+	/* meter profiles tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER_PROF,
+			    tfs->TF_METER_PROF_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_METER_INST;
+
+	/* meter rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER,
+			    tfs->TF_METER_INST_POOL_NAME_RX);
+
+	/* meter tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METER,
+			    tfs->TF_METER_INST_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the mirror resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_mirror(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_MIRROR;
+	uint32_t end = 0;
+
+	/* mirror rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_MIRROR,
+			    tfs->TF_MIRROR_POOL_NAME_RX);
+
+	/* mirror tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_MIRROR,
+			    tfs->TF_MIRROR_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the upar resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_upar(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_UPAR;
+	uint32_t end = 0;
+
+	/* upar rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_UPAR,
+			    tfs->TF_UPAR_POOL_NAME_RX);
+
+	/* upar tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_UPAR,
+			    tfs->TF_UPAR_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the sp tcam resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_sp_tcam(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_SP_TCAM;
+	uint32_t end = 0;
+
+	/* sp tcam rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_SP_TCAM,
+			    tfs->TF_SP_TCAM_POOL_NAME_RX);
+
+	/* sp tcam tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_SP_TCAM,
+			    tfs->TF_SP_TCAM_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the l2 func resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_l2_func(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
+	uint32_t end = 0;
+
+	/* l2 func rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_RX);
+
+	/* l2 func tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_L2_FUNC,
+			    tfs->TF_L2_FUNC_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the fkb resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_fkb(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_FKB;
+	uint32_t end = 0;
+
+	/* fkb rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_FKB,
+			    tfs->TF_FKB_POOL_NAME_RX);
+
+	/* fkb tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_FKB,
+			    tfs->TF_FKB_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the tbl scope resources allocated
+ * that Truflow does not own.
+ */
+static void
+tf_rm_rsvd_tbl_scope(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_TBL_SCOPE;
+	uint32_t end = 0;
+
+	/* tbl scope rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_TBL_SCOPE,
+			    tfs->TF_TBL_SCOPE_POOL_NAME_RX);
+
+	/* tbl scope tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_TBL_SCOPE,
+			    tfs->TF_TBL_SCOPE_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the epoch resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_epoch(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_EPOCH0;
+	uint32_t end = 0;
+
+	/* epoch0 rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH0,
+			    tfs->TF_EPOCH0_POOL_NAME_RX);
+
+	/* epoch0 tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH0,
+			    tfs->TF_EPOCH0_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_EPOCH1;
+
+	/* epoch1 rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH1,
+			    tfs->TF_EPOCH1_POOL_NAME_RX);
+
+	/* epoch1 tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_EPOCH1,
+			    tfs->TF_EPOCH1_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the metadata resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_metadata(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_METADATA;
+	uint32_t end = 0;
+
+	/* metadata rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METADATA,
+			    tfs->TF_METADATA_POOL_NAME_RX);
+
+	/* metadata tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_METADATA,
+			    tfs->TF_METADATA_POOL_NAME_TX);
+}
+
+/**
+ * Internal function to mark all the ct state resources allocated that
+ * Truflow does not own.
+ */
+static void
+tf_rm_rsvd_ct_state(struct tf_session *tfs)
+{
+	uint32_t index = TF_RESC_TYPE_HW_CT_STATE;
+	uint32_t end = 0;
+
+	/* ct state rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_CT_STATE,
+			    tfs->TF_CT_STATE_POOL_NAME_RX);
+
+	/* ct state tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_CT_STATE,
+			    tfs->TF_CT_STATE_POOL_NAME_TX);
 }
 
 /**
- * Internal function to mark all the l2 ctxt allocated that Truflow
- * does not own.
+ * Internal function to mark all the range resources allocated that
+ * Truflow does not own.
  */
 static void
-tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
+tf_rm_rsvd_range(struct tf_session *tfs)
 {
-	uint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;
+	uint32_t index = TF_RESC_TYPE_HW_RANGE_PROF;
 	uint32_t end = 0;
 
-	/* l2 ctxt rx direction */
+	/* range profile rx direction */
 	if (tfs->resc.rx.hw_entry[index].stride > 0)
 		end = tfs->resc.rx.hw_entry[index].start +
 			tfs->resc.rx.hw_entry[index].stride - 1;
@@ -461,10 +1336,10 @@ tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
 			    tfs->resc.rx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);
+			    TF_NUM_RANGE_PROF,
+			    tfs->TF_RANGE_PROF_POOL_NAME_RX);
 
-	/* l2 ctxt tx direction */
+	/* range profile tx direction */
 	if (tfs->resc.tx.hw_entry[index].stride > 0)
 		end = tfs->resc.tx.hw_entry[index].start +
 			tfs->resc.tx.hw_entry[index].stride - 1;
@@ -472,21 +1347,45 @@ tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
 			    tfs->resc.tx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_CTXT_TCAM,
-			    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);
+			    TF_NUM_RANGE_PROF,
+			    tfs->TF_RANGE_PROF_POOL_NAME_TX);
+
+	index = TF_RESC_TYPE_HW_RANGE_ENTRY;
+
+	/* range entry rx direction */
+	if (tfs->resc.rx.hw_entry[index].stride > 0)
+		end = tfs->resc.rx.hw_entry[index].start +
+			tfs->resc.rx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
+			    tfs->resc.rx.hw_entry[index].start,
+			    end,
+			    TF_NUM_RANGE_ENTRY,
+			    tfs->TF_RANGE_ENTRY_POOL_NAME_RX);
+
+	/* range entry tx direction */
+	if (tfs->resc.tx.hw_entry[index].stride > 0)
+		end = tfs->resc.tx.hw_entry[index].start +
+			tfs->resc.tx.hw_entry[index].stride - 1;
+
+	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
+			    tfs->resc.tx.hw_entry[index].start,
+			    end,
+			    TF_NUM_RANGE_ENTRY,
+			    tfs->TF_RANGE_ENTRY_POOL_NAME_TX);
 }
 
 /**
- * Internal function to mark all the l2 func resources allocated that
+ * Internal function to mark all the lag resources allocated that
  * Truflow does not own.
  */
 static void
-tf_rm_rsvd_l2_func(struct tf_session *tfs)
+tf_rm_rsvd_lag_entry(struct tf_session *tfs)
 {
-	uint32_t index = TF_RESC_TYPE_HW_L2_FUNC;
+	uint32_t index = TF_RESC_TYPE_HW_LAG_ENTRY;
 	uint32_t end = 0;
 
-	/* l2 func rx direction */
+	/* lag entry rx direction */
 	if (tfs->resc.rx.hw_entry[index].stride > 0)
 		end = tfs->resc.rx.hw_entry[index].start +
 			tfs->resc.rx.hw_entry[index].stride - 1;
@@ -494,10 +1393,10 @@ tf_rm_rsvd_l2_func(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,
 			    tfs->resc.rx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_RX);
+			    TF_NUM_LAG_ENTRY,
+			    tfs->TF_LAG_ENTRY_POOL_NAME_RX);
 
-	/* l2 func tx direction */
+	/* lag entry tx direction */
 	if (tfs->resc.tx.hw_entry[index].stride > 0)
 		end = tfs->resc.tx.hw_entry[index].start +
 			tfs->resc.tx.hw_entry[index].stride - 1;
@@ -505,8 +1404,8 @@ tf_rm_rsvd_l2_func(struct tf_session *tfs)
 	tf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,
 			    tfs->resc.tx.hw_entry[index].start,
 			    end,
-			    TF_NUM_L2_FUNC,
-			    tfs->TF_L2_FUNC_POOL_NAME_TX);
+			    TF_NUM_LAG_ENTRY,
+			    tfs->TF_LAG_ENTRY_POOL_NAME_TX);
 }
 
 /**
@@ -909,7 +1808,21 @@ tf_rm_reserve_hw(struct tf *tfp)
 	 * used except the resources that Truflow took ownership of.
 	 */
 	tf_rm_rsvd_l2_ctxt(tfs);
+	tf_rm_rsvd_prof(tfs);
+	tf_rm_rsvd_em_prof(tfs);
+	tf_rm_rsvd_wc(tfs);
+	tf_rm_rsvd_mirror(tfs);
+	tf_rm_rsvd_meter(tfs);
+	tf_rm_rsvd_upar(tfs);
+	tf_rm_rsvd_sp_tcam(tfs);
 	tf_rm_rsvd_l2_func(tfs);
+	tf_rm_rsvd_fkb(tfs);
+	tf_rm_rsvd_tbl_scope(tfs);
+	tf_rm_rsvd_epoch(tfs);
+	tf_rm_rsvd_metadata(tfs);
+	tf_rm_rsvd_ct_state(tfs);
+	tf_rm_rsvd_range(tfs);
+	tf_rm_rsvd_lag_entry(tfs);
 }
 
 /**
@@ -972,6 +1885,7 @@ tf_rm_allocate_validate_hw(struct tf *tfp,
 			"%s, HW QCAPS validation failed, error_flag:0x%x\n",
 			tf_dir_2_str(dir),
 			error_flag);
+		tf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);
 		goto cleanup;
 	}
 
@@ -1032,65 +1946,388 @@ tf_rm_allocate_validate_sram(struct tf *tfp,
 	struct tf_rm_entry *sram_entries;
 	uint32_t error_flag;
 
-	if (dir == TF_DIR_RX)
-		sram_entries = tfs->resc.rx.sram_entry;
-	else
-		sram_entries = tfs->resc.tx.sram_entry;
+	if (dir == TF_DIR_RX)
+		sram_entries = tfs->resc.rx.sram_entry;
+	else
+		sram_entries = tfs->resc.tx.sram_entry;
+
+	/* Query for Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM qcaps message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
+			tf_dir_2_str(dir),
+			error_flag);
+		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
+		goto cleanup;
+	}
+
+	/* Post process SRAM capability */
+	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
+		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+
+	/* Allocate Session SRAM Resources */
+	rc = tf_msg_session_sram_resc_alloc(tfp,
+					    dir,
+					    &sram_alloc,
+					    sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM alloc message send failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	/* Perform SRAM allocation validation as it's possible the
+	 * resource availability changed between qcaps and alloc
+	 */
+	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "%s, SRAM Resource allocation validation failed\n",
+			    tf_dir_2_str(dir));
+		goto cleanup;
+	}
+
+	return 0;
+
+ cleanup:
+	return -1;
+}
+
+/**
+ * Helper function used to prune a HW resource array to only hold
+ * elements that need to be flushed.
+ *
+ * [in] tfs
+ *   Session handle
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * [in] hw_entries
+ *   Master HW Resource database
+ *
+ * [in/out] flush_entries
+ *   Pruned HW Resource database of entries to be flushed. This
+ *   array should be passed in as a complete copy of the master HW
+ *   Resource database. The outgoing result will be a pruned version
+ *   based on the result of the requested checking
+ *
+ * Returns:
+ *    0 - Success, no flush required
+ *    1 - Success, flush required
+ *   -1 - Internal error
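+ *
+ * Illustrative call pattern (mirrors the use in tf_rm_close below;
+ * flush_entries is first seeded as a complete copy of the master
+ * database):
+ *
+ *   rc = tf_rm_hw_to_flush(tfs, dir, hw_entries, hw_flush_entries);
+ *   if (rc == 1)
+ *           tf_msg_session_hw_resc_flush(tfp, dir, hw_flush_entries);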
+ */
+static int
+tf_rm_hw_to_flush(struct tf_session *tfs,
+		  enum tf_dir dir,
+		  struct tf_rm_entry *hw_entries,
+		  struct tf_rm_entry *flush_entries)
+{
+	int rc;
+	int flush_rc = 0;
+	int free_cnt;
+	struct bitalloc *pool;
+
+	/* Check all the hw resource pools and check for left over
+	 * elements. Any found will result in the complete pool of a
+	 * type to get invalidated.
+	 */
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_L2_CTXT_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_PROF_FUNC_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride) {
+		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_PROF_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_EM_PROF_ID_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride) {
+		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	flush_entries[TF_RESC_TYPE_HW_EM_REC].start = 0;
+	flush_entries[TF_RESC_TYPE_HW_EM_REC].stride = 0;
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_WC_TCAM_PROF_ID_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride) {
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_WC_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_WC_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_METER_PROF_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_PROF].stride) {
+		flush_entries[TF_RESC_TYPE_HW_METER_PROF].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_METER_PROF].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_METER_INST_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_INST].stride) {
+		flush_entries[TF_RESC_TYPE_HW_METER_INST].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_METER_INST].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_MIRROR_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_MIRROR].stride) {
+		flush_entries[TF_RESC_TYPE_HW_MIRROR].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_MIRROR].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_UPAR_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_UPAR].stride) {
+		flush_entries[TF_RESC_TYPE_HW_UPAR].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_UPAR].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_SP_TCAM_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_SP_TCAM].stride) {
+		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_SP_TCAM].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_L2_FUNC_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_FUNC].stride) {
+		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_L2_FUNC].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_FKB_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_FKB].stride) {
+		flush_entries[TF_RESC_TYPE_HW_FKB].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_FKB].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
 
-	/* Query for Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_qcaps(tfp, dir, &sram_query);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM qcaps message send failed\n",
-			    tf_dir_2_str(dir));
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_TBL_SCOPE_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride) {
+		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;
+	} else {
+		PMD_DRV_LOG(ERR, "%s: TBL_SCOPE free_cnt:%d, entries:%d\n",
+			    tf_dir_2_str(dir),
+			    free_cnt,
+			    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);
+		flush_rc = 1;
 	}
 
-	rc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			"%s, SRAM QCAPS validation failed, error_flag:%x\n",
-			tf_dir_2_str(dir),
-			error_flag);
-		tf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_EPOCH0_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH0].stride) {
+		flush_entries[TF_RESC_TYPE_HW_EPOCH0].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_EPOCH0].stride = 0;
+	} else {
+		flush_rc = 1;
 	}
 
-	/* Post process SRAM capability */
-	for (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)
-		sram_alloc.sram_num[i] = sram_query.sram_query[i].max;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_EPOCH1_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH1].stride) {
+		flush_entries[TF_RESC_TYPE_HW_EPOCH1].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_EPOCH1].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
 
-	/* Allocate Session SRAM Resources */
-	rc = tf_msg_session_sram_resc_alloc(tfp,
-					    dir,
-					    &sram_alloc,
-					    sram_entries);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM alloc message send failed\n",
-			    tf_dir_2_str(dir));
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_METADATA_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_METADATA].stride) {
+		flush_entries[TF_RESC_TYPE_HW_METADATA].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_METADATA].stride = 0;
+	} else {
+		flush_rc = 1;
 	}
 
-	/* Perform SRAM allocation validation as its possible the
-	 * resource availability changed between qcaps and alloc
-	 */
-	rc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);
-	if (rc) {
-		/* Log error */
-		PMD_DRV_LOG(ERR,
-			    "%s, SRAM Resource allocation validation failed\n",
-			    tf_dir_2_str(dir));
-		goto cleanup;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_CT_STATE_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_CT_STATE].stride) {
+		flush_entries[TF_RESC_TYPE_HW_CT_STATE].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_CT_STATE].stride = 0;
+	} else {
+		flush_rc = 1;
 	}
 
-	return 0;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_RANGE_PROF_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride) {
+		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
 
- cleanup:
-	return -1;
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_RANGE_ENTRY_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride) {
+		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	TF_RM_GET_POOLS(tfs, dir, &pool,
+			TF_LAG_ENTRY_POOL_NAME,
+			rc);
+	if (rc)
+		return rc;
+	free_cnt = ba_free_count(pool);
+	if (free_cnt == hw_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride) {
+		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].start = 0;
+		flush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride = 0;
+	} else {
+		flush_rc = 1;
+	}
+
+	return flush_rc;
 }
 
 /**
@@ -1335,6 +2572,32 @@ tf_rm_sram_to_flush(struct tf_session *tfs,
 }
 
 /**
+ * Helper function used to generate an error log for the HW types that
+ * need to be flushed. The types should have been cleaned up ahead of
+ * invoking tf_close_session.
+ *
+ * [in] hw_entries
+ *   HW Resource database holding elements to be flushed
+ */
+static void
+tf_rm_log_hw_flush(enum tf_dir dir,
+		   struct tf_rm_entry *hw_entries)
+{
+	int i;
+
+	/* Walk the hw flush array and log the types that weren't
+	 * cleaned up.
+	 */
+	for (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {
+		if (hw_entries[i].stride != 0)
+			PMD_DRV_LOG(ERR,
+				    "%s: %s was not cleaned up\n",
+				    tf_dir_2_str(dir),
+				    tf_hcapi_hw_2_str(i));
+	}
+}
+
+/**
  * Helper function used to generate an error log for the SRAM types
  * that need to be flushed. The types should have been cleaned up
  * ahead of invoking tf_close_session.
@@ -1386,6 +2649,53 @@ tf_rm_init(struct tf *tfp __rte_unused)
 	/* Initialization of HW Resource Pools */
 	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);
 	ba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);
+	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);
+	ba_init(tfs->TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);
+	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);
+	ba_init(tfs->TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);
+	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);
+	ba_init(tfs->TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);
+
+	/* TBD, how do we want to handle EM records? */
+	/* EM Records should not be controlled by way of a pool */
+
+	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);
+	ba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);
+	ba_init(tfs->TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);
+	ba_init(tfs->TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);
+	ba_init(tfs->TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);
+	ba_init(tfs->TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);
+	ba_init(tfs->TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);
+	ba_init(tfs->TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);
+	ba_init(tfs->TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);
+	ba_init(tfs->TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);
+	ba_init(tfs->TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);
+	ba_init(tfs->TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);
+
+	ba_init(tfs->TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);
+	ba_init(tfs->TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);
+
+	ba_init(tfs->TF_FKB_POOL_NAME_RX, TF_NUM_FKB);
+	ba_init(tfs->TF_FKB_POOL_NAME_TX, TF_NUM_FKB);
+
+	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);
+	ba_init(tfs->TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);
+	ba_init(tfs->TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);
+	ba_init(tfs->TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);
+	ba_init(tfs->TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);
+	ba_init(tfs->TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);
+	ba_init(tfs->TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);
+	ba_init(tfs->TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);
+	ba_init(tfs->TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);
+	ba_init(tfs->TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);
+	ba_init(tfs->TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);
+	ba_init(tfs->TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);
+	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);
+	ba_init(tfs->TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);
+	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);
+	ba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);
+	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);
+	ba_init(tfs->TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);
 
 	/* Initialization of SRAM Resource Pools
 	 * These pools are set to the TFLIB defined MAX sizes not
@@ -1476,6 +2786,7 @@ tf_rm_close(struct tf *tfp)
 	int rc_close = 0;
 	int i;
 	struct tf_rm_entry *hw_entries;
+	struct tf_rm_entry *hw_flush_entries;
 	struct tf_rm_entry *sram_entries;
 	struct tf_rm_entry *sram_flush_entries;
 	struct tf_session *tfs __rte_unused =
@@ -1501,14 +2812,41 @@ tf_rm_close(struct tf *tfp)
 	for (i = 0; i < TF_DIR_MAX; i++) {
 		if (i == TF_DIR_RX) {
 			hw_entries = tfs->resc.rx.hw_entry;
+			hw_flush_entries = flush_resc.rx.hw_entry;
 			sram_entries = tfs->resc.rx.sram_entry;
 			sram_flush_entries = flush_resc.rx.sram_entry;
 		} else {
 			hw_entries = tfs->resc.tx.hw_entry;
+			hw_flush_entries = flush_resc.tx.hw_entry;
 			sram_entries = tfs->resc.tx.sram_entry;
 			sram_flush_entries = flush_resc.tx.sram_entry;
 		}
 
+		/* Check for any not previously freed HW resources and
+		 * flush if required.
+		 */
+		rc = tf_rm_hw_to_flush(tfs, i, hw_entries, hw_flush_entries);
+		if (rc) {
+			rc_close = -ENOTEMPTY;
+			/* Log error */
+			PMD_DRV_LOG(ERR,
+				    "%s, lingering HW resources\n",
+				    tf_dir_2_str(i));
+
+			/* Log the entries to be flushed */
+			tf_rm_log_hw_flush(i, hw_flush_entries);
+			rc = tf_msg_session_hw_resc_flush(tfp,
+							  i,
+							  hw_flush_entries);
+			if (rc) {
+				rc_close = rc;
+				/* Log error */
+				PMD_DRV_LOG(ERR,
+					    "%s, HW flush failed\n",
+					    tf_dir_2_str(i));
+			}
+		}
+
 		/* Check for any not previously freed SRAM resources
 		 * and flush if required.
 		 */
@@ -1560,6 +2898,234 @@ tf_rm_close(struct tf *tfp)
 	return rc_close;
 }
 
+#if (TF_SHADOW == 1)
+int
+tf_rm_shadow_db_init(struct tf_session *tfs __rte_unused)
+{
+	int rc = 1;
+
+	return rc;
+}
+#endif /* TF_SHADOW */
+
+int
+tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,
+			    enum tf_dir dir,
+			    enum tf_tcam_tbl_type type,
+			    struct bitalloc **pool)
+{
+	int rc = -EOPNOTSUPP;
+
+	*pool = NULL;
+
+	switch (type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_L2_CTXT_TCAM_POOL_NAME,
+				rc);
+		break;
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_PROF_TCAM_POOL_NAME,
+				rc);
+		break;
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_WC_TCAM_POOL_NAME,
+				rc);
+		break;
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+	default:
+		break;
+	}
+
+	if (rc == -EOPNOTSUPP) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Tcam type not supported, type:%d\n",
+			    dir,
+			    type);
+		return rc;
+	} else if (rc == -1) {
+		PMD_DRV_LOG(ERR,
+			    "%s:, Tcam type lookup failed, type:%d\n",
+			    tf_dir_2_str(dir),
+			    type);
+		return rc;
+	}
+
+	return 0;
+}
+
+int
+tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,
+			   enum tf_dir dir,
+			   enum tf_tbl_type type,
+			   struct bitalloc **pool)
+{
+	int rc = -EOPNOTSUPP;
+
+	*pool = NULL;
+
+	switch (type) {
+	case TF_TBL_TYPE_FULL_ACT_RECORD:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_FULL_ACTION_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_MCAST_GROUPS:
+		/* No pools for TX direction, so bail out */
+		if (dir == TF_DIR_TX)
+			break;
+		TF_RM_GET_POOLS_RX(tfs, pool,
+				   TF_SRAM_MCG_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_8B:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_ENCAP_8B_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_16B:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_ENCAP_16B_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_ENCAP_64B:
+		/* No pools for RX direction, so bail out */
+		if (dir == TF_DIR_RX)
+			break;
+		TF_RM_GET_POOLS_TX(tfs, pool,
+				   TF_SRAM_ENCAP_64B_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_SP_SMAC_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+		/* No pools for TX direction, so bail out */
+		if (dir == TF_DIR_RX)
+			break;
+		TF_RM_GET_POOLS_TX(tfs, pool,
+				   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+		/* No pools for TX direction, so bail out */
+		if (dir == TF_DIR_RX)
+			break;
+		TF_RM_GET_POOLS_TX(tfs, pool,
+				   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);
+		rc = 0;
+		break;
+	case TF_TBL_TYPE_ACT_STATS_64:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_STATS_64B_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_NAT_SPORT_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_NAT_S_IPV4_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_SRAM_NAT_D_IPV4_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_METER_PROF:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_METER_PROF_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_METER_INST:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_METER_INST_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_MIRROR_CONFIG:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_MIRROR_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_UPAR:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_UPAR_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_EPOCH0:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_EPOCH0_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_EPOCH1:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_EPOCH1_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_METADATA:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_METADATA_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_CT_STATE:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_CT_STATE_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_RANGE_PROF:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_RANGE_PROF_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_RANGE_ENTRY:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_RANGE_ENTRY_POOL_NAME,
+				rc);
+		break;
+	case TF_TBL_TYPE_LAG:
+		TF_RM_GET_POOLS(tfs, dir, pool,
+				TF_LAG_ENTRY_POOL_NAME,
+				rc);
+		break;
+	/* Not yet supported */
+	case TF_TBL_TYPE_ACT_ENCAP_32B:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:
+	case TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:
+	case TF_TBL_TYPE_VNIC_SVIF:
+		break;
+	/* No bitalloc pools for these types */
+	case TF_TBL_TYPE_EXT:
+	case TF_TBL_TYPE_EXT_0:
+	default:
+		break;
+	}
+
+	if (rc == -EOPNOTSUPP) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Table type not supported, type:%d\n",
+			    dir,
+			    type);
+		return rc;
+	} else if (rc == -1) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Table type lookup failed, type:%d\n",
+			    dir,
+			    type);
+		return rc;
+	}
+
+	return 0;
+}
+
 int
 tf_rm_convert_tbl_type(enum tf_tbl_type type,
 		       uint32_t *hcapi_type)
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 34b6c41..fed34f1 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -12,6 +12,7 @@
 #include "bitalloc.h"
 #include "tf_core.h"
 #include "tf_rm.h"
+#include "tf_tbl.h"
 
 /** Session defines
  */
@@ -285,6 +286,15 @@ struct tf_session {
 
 	/** Lookup3 init values */
 	uint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];
+
+	/** Table scope array */
+	struct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];
+
+	/** Each external pool is associated with a single table scope.
+	 *  For each external pool, store the associated table scope in
+	 *  this data structure.
+	 */
+	uint32_t ext_pool_2_scope[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 };
 
 #endif /* _TF_SESSION_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
new file mode 100644
index 0000000..5a5e72f
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_TBL_H_
+#define _TF_TBL_H_
+
+#include <stdint.h>
+
+enum tf_pg_tbl_lvl {
+	PT_LVL_0,
+	PT_LVL_1,
+	PT_LVL_2,
+	PT_LVL_MAX
+};
+
+/** Invalid table scope id */
+#define TF_TBL_SCOPE_INVALID 0xffffffff
+
+/**
+ * Table Scope Control Block
+ *
+ * Holds private data for a table scope. Only one instance of a table
+ * scope with Internal EM is supported.
+ */
+struct tf_tbl_scope_cb {
+	uint32_t tbl_scope_id;
+	int index;
+	uint32_t              *ext_pool_mem[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
+};
+
+/**
+ * Initialize table pool structure to indicate
+ * no table scope has been associated with the
+ * external pool of indexes.
+ *
+ * [in] session
+ */
+void
+tf_init_tbl_pool(struct tf_session *session);
+
+#endif /* _TF_TBL_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 09/34] net/bnxt: add tf core identifier support
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (7 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 08/34] net/bnxt: add resource manager functionality Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 10/34] net/bnxt: add tf core TCAM support Venkat Duvvuru
                         ` (27 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Farah Smith

From: Farah Smith <farah.smith@broadcom.com>

- Add TruFlow Identifier resource support
- Add TruFlow public API for Identifier resources.
- Add support code and stack for Identifier resource allocation control.
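
A minimal usage sketch of the new API, assuming a session handle "tfp"
obtained from tf_open_session() (the identifier type, direction and
error handling below are illustrative only):

	struct tf_alloc_identifier_parms aparms = { 0 };
	struct tf_free_identifier_parms fparms = { 0 };

	aparms.dir = TF_DIR_RX;
	aparms.ident_type = TF_IDENT_TYPE_PROF_FUNC;
	if (tf_alloc_identifier(tfp, &aparms) == 0) {
		/* aparms.id now holds the allocated identifier */
		fparms.dir = aparms.dir;
		fparms.ident_type = aparms.ident_type;
		fparms.id = aparms.id;
		tf_free_identifier(tfp, &fparms);
	}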

Signed-off-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c | 156 +++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h |  55 +++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c  |  13 ++++
 3 files changed, 224 insertions(+)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index bb6d38b..7b027f7 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -284,3 +284,159 @@ tf_close_session(struct tf *tfp)
 
 	return rc_close;
 }
+
+/** allocate identifier resource
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_identifier(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms)
+{
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+	int id;
+	int rc;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	switch (parms->ident_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_L2_CTXT_REMAP_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_PROF_FUNC:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_PROF_FUNC_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_EM_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_EM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_WC_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_WC_TCAM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_L2_FUNC:
+		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EOPNOTSUPP;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "%s: %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EINVAL;
+		break;
+	}
+
+	if (rc) {
+		PMD_DRV_LOG(ERR, "%s: identifier pool %s failure\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		return rc;
+	}
+
+	id = ba_alloc(session_pool);
+
+	if (id == BA_FAIL) {
+		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		return -ENOMEM;
+	}
+	parms->id = id;
+	return 0;
+}
+
+/** free identifier resource
+ *
+ * Returns success or failure code.
+ */
+int tf_free_identifier(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms)
+{
+	struct bitalloc *session_pool;
+	int rc;
+	int ba_rc;
+	struct tf_session *tfs;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: Session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	switch (parms->ident_type) {
+	case TF_IDENT_TYPE_L2_CTXT:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_L2_CTXT_REMAP_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_PROF_FUNC:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_PROF_FUNC_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_EM_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_EM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_WC_PROF:
+		TF_RM_GET_POOLS(tfs, parms->dir, &session_pool,
+				TF_WC_TCAM_PROF_ID_POOL_NAME,
+				rc);
+		break;
+	case TF_IDENT_TYPE_L2_FUNC:
+		PMD_DRV_LOG(ERR, "%s: unsupported %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EOPNOTSUPP;
+		break;
+	default:
+		PMD_DRV_LOG(ERR, "%s: invalid %s\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		rc = -EINVAL;
+		break;
+	}
+	if (rc) {
+		PMD_DRV_LOG(ERR, "%s: %s Identifier pool access failed\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type));
+		return rc;
+	}
+
+	ba_rc = ba_inuse(session_pool, (int)parms->id);
+
+	if (ba_rc == BA_FAIL || ba_rc == BA_ENTRY_FREE) {
+		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_ident_2_str(parms->ident_type),
+			    parms->id);
+		return -EINVAL;
+	}
+
+	ba_free(session_pool, (int)parms->id);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 16c8251..afad9ea 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -402,6 +402,61 @@ enum tf_identifier_type {
 	TF_IDENT_TYPE_L2_FUNC
 };
 
+/** tf_alloc_identifier parameter definition
+ */
+struct tf_alloc_identifier_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [out] Identifier allocated
+	 */
+	uint16_t id;
+};
+
+/** tf_free_identifier parameter definition
+ */
+struct tf_free_identifier_parms {
+	/**
+	 * [in]	 receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Identifier type
+	 */
+	enum tf_identifier_type ident_type;
+	/**
+	 * [in] ID to free
+	 */
+	uint16_t id;
+};
+
+/** allocate identifier resource
+ *
+ * TruFlow core will allocate a free id from the per identifier resource type
+ * pool reserved for the session during tf_open().  No firmware is involved.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_identifier(struct tf *tfp,
+			struct tf_alloc_identifier_parms *parms);
+
+/** free identifier resource
+ *
+ * TruFlow core will return an id back to the per identifier resource type pool
+ * reserved for the session.  No firmware is involved.  During tf_close, the
+ * complete pool is returned to the firmware.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_identifier(struct tf *tfp,
+		       struct tf_free_identifier_parms *parms);
+
 /**
  * TCAM table type
  */
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 4ce2bc5..c44f96f 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -94,6 +94,19 @@
 } while (0)
 
 /**
+ * This is the MAX data we can transport across regular HWRM
+ */
+#define TF_PCI_BUF_SIZE_MAX 88
+
+/**
+ * If data bigger than TF_PCI_BUF_SIZE_MAX then use DMA method
+ */
+struct tf_msg_dma_buf {
+	void *va_addr;
+	uint64_t pa_addr;
+};
+
+/**
  * Sends session open request to TF Firmware
  */
 int
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 10/34] net/bnxt: add tf core TCAM support
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (8 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 09/34] net/bnxt: add tf core identifier support Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 11/34] net/bnxt: add tf core table scope support Venkat Duvvuru
                         ` (26 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Shahaji Bhosle, Jay Ding

From: Shahaji Bhosle <sbhosle@broadcom.com>

- Add TruFlow TCAM public API functions
- Add TCAM support functions as well as public APIs.
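
A minimal usage sketch combining the new APIs, assuming an open session
handle "tfp" (the table type, key/mask/result sizes and error handling
are illustrative only):

	uint8_t key[8] = { 0 };
	uint8_t mask[8] = { 0 };
	uint8_t result[8] = { 0 };
	struct tf_alloc_tcam_entry_parms aparms = { 0 };
	struct tf_set_tcam_entry_parms sparms = { 0 };

	aparms.dir = TF_DIR_RX;
	aparms.tcam_tbl_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM;
	aparms.key = key;
	aparms.mask = mask;
	aparms.key_sz_in_bits = 64;
	if (tf_alloc_tcam_entry(tfp, &aparms) == 0) {
		sparms.dir = aparms.dir;
		sparms.tcam_tbl_type = aparms.tcam_tbl_type;
		sparms.idx = aparms.idx;
		sparms.key = key;
		sparms.mask = mask;
		sparms.key_sz_in_bits = 64;
		sparms.result = result;
		sparms.result_sz_in_bits = 64;
		tf_set_tcam_entry(tfp, &sparms);
	}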

Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_core.c | 163 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_core.h | 227 +++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c  | 159 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h  |  30 +++++
 4 files changed, 579 insertions(+)

diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 7b027f7..39f4a11 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -440,3 +440,166 @@ int tf_free_identifier(struct tf *tfp,
 
 	return 0;
 }
+
+int
+tf_alloc_tcam_entry(struct tf *tfp,
+		    struct tf_alloc_tcam_entry_parms *parms)
+{
+	int rc;
+	int index;
+	struct tf_session *tfs;
+	struct bitalloc *session_pool;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	rc = tf_rm_lookup_tcam_type_pool(tfs,
+					 parms->dir,
+					 parms->tcam_tbl_type,
+					 &session_pool);
+	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
+	if (rc)
+		return rc;
+
+	index = ba_alloc(session_pool);
+	if (index == BA_FAIL) {
+		PMD_DRV_LOG(ERR, "%s: %s: No resource available\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type));
+		return -ENOMEM;
+	}
+
+	parms->idx = index;
+	return 0;
+}
+
+int
+tf_set_tcam_entry(struct tf *tfp,
+		  struct tf_set_tcam_entry_parms *parms)
+{
+	int rc;
+	int id;
+	struct tf_session *tfs;
+	struct bitalloc *session_pool;
+
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "%s, Session info invalid\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/*
+	 * Each tcam send msg function should check that the key size is in range
+	 */
+
+	rc = tf_rm_lookup_tcam_type_pool(tfs,
+					 parms->dir,
+					 parms->tcam_tbl_type,
+					 &session_pool);
+	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
+	if (rc)
+		return rc;
+
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, parms->idx);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "%s: %s: Invalid or not allocated index, idx:%d\n",
+		   tf_dir_2_str(parms->dir),
+		   tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+		   parms->idx);
+		return -EINVAL;
+	}
+
+	rc = tf_msg_tcam_entry_set(tfp, parms);
+
+	return rc;
+}
+
+int
+tf_get_tcam_entry(struct tf *tfp __rte_unused,
+		  struct tf_get_tcam_entry_parms *parms __rte_unused)
+{
+	int rc = -EOPNOTSUPP;
+
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "%s, Session info invalid\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	return rc;
+}
+
+int
+tf_free_tcam_entry(struct tf *tfp,
+		   struct tf_free_tcam_entry_parms *parms)
+{
+	int rc;
+	struct tf_session *tfs;
+	struct bitalloc *session_pool;
+
+	if (parms == NULL || tfp == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR, "%s: Session error\n",
+			    tf_dir_2_str(parms->dir));
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	rc = tf_rm_lookup_tcam_type_pool(tfs,
+					 parms->dir,
+					 parms->tcam_tbl_type,
+					 &session_pool);
+	/* Error logging handled by tf_rm_lookup_tcam_type_pool */
+	if (rc)
+		return rc;
+
+	rc = ba_inuse(session_pool, (int)parms->idx);
+	if (rc == BA_FAIL || rc == BA_ENTRY_FREE) {
+		PMD_DRV_LOG(ERR, "%s: %s: Entry %d already free\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+			    parms->idx);
+		return -EINVAL;
+	}
+
+	ba_free(session_pool, (int)parms->idx);
+
+	rc = tf_msg_tcam_entry_free(tfp, parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR, "%s: %s: Entry %d free failed\n",
+			    tf_dir_2_str(parms->dir),
+			    tf_tcam_tbl_2_str(parms->tcam_tbl_type),
+			    parms->idx);
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index afad9ea..1431d06 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -472,6 +472,233 @@ enum tf_tcam_tbl_type {
 };
 
 /**
+ * @page tcam TCAM Access
+ *
+ * @ref tf_alloc_tcam_entry
+ *
+ * @ref tf_set_tcam_entry
+ *
+ * @ref tf_get_tcam_entry
+ *
+ * @ref tf_free_tcam_entry
+ */
+
+/** tf_alloc_tcam_entry parameter definition
+ */
+struct tf_alloc_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Enable search for matching entry
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Key data to match on (if search)
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key size in bits (if search)
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] Mask data to match on (if search)
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] Priority of entry requested (definition TBD)
+	 */
+	uint32_t priority;
+	/**
+	 * [out] If search, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current refcnt after allocation
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx allocated
+	 *
+	 */
+};
+
+/** allocate TCAM entry
+ *
+ * Allocate a TCAM entry - one of these types:
+ *
+ * L2 Context
+ * Profile TCAM
+ * WC TCAM
+ * VEB TCAM
+ *
+ * This function allocates a TCAM table record. It will attempt to
+ * allocate a TCAM table entry from the session owned TCAM entries or
+ * search a shadow copy of the TCAM table for a matching entry if
+ * search is enabled. Key, mask and result must match for
+ * hit to be set.  Only TruFlow core data is accessed.
+ * A hash table to entry mapping is maintained for search purposes.  If
+ * search is not enabled, the first available free entry is returned based
+ * on priority and alloc_cnt is set to 1.  If search is enabled and a matching
+ * entry to entry_data is found, hit is set to TRUE and alloc_cnt is set to 1.
+ * RefCnt is also returned.
+ *
+ * Also returns success or failure code.
+ */
+int tf_alloc_tcam_entry(struct tf *tfp,
+			struct tf_alloc_tcam_entry_parms *parms);
+
+/** tf_set_tcam_entry parameter definition
+ */
+struct tf_set_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] base index of the entry to program
+	 */
+	uint16_t idx;
+	/**
+	 * [in] struct containing key
+	 */
+	uint8_t *key;
+	/**
+	 * [in] struct containing mask fields
+	 */
+	uint8_t *mask;
+	/**
+	 * [in] key size in bits (if search)
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] struct containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [in] struct containing result size in bits
+	 */
+	uint16_t result_sz_in_bits;
+};
+
+/** set TCAM entry
+ *
+ * Program a TCAM table entry for a TruFlow session.
+ *
+ * If the entry has not been allocated, an error will be returned.
+ *
+ * Returns success or failure code.
+ */
+int tf_set_tcam_entry(struct tf *tfp,
+		      struct tf_set_tcam_entry_parms *parms);
+
+/** tf_get_tcam_entry parameter definition
+ */
+struct tf_get_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type  tcam_tbl_type;
+	/**
+	 * [in] index of the entry to get
+	 */
+	uint16_t idx;
+	/**
+	 * [out] struct containing key
+	 */
+	uint8_t *key;
+	/**
+	 * [out] struct containing mask fields
+	 */
+	uint8_t *mask;
+	/**
+	 * [out] key size in bits
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [out] struct containing result
+	 */
+	uint8_t *result;
+	/**
+	 * [out] struct containing result size in bits
+	 */
+	uint16_t result_sz_in_bits;
+};
+
+/** get TCAM entry
+ *
+ * Read a TCAM table entry for a TruFlow session.
+ *
+ * If the entry has not been allocated, an error will be returned.
+ *
+ * Returns success or failure code.
+ */
+int tf_get_tcam_entry(struct tf *tfp,
+		      struct tf_get_tcam_entry_parms *parms);
+
+/** tf_free_tcam_entry parameter definition
+ */
+struct tf_free_tcam_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] TCAM table type
+	 */
+	enum tf_tcam_tbl_type tcam_tbl_type;
+	/**
+	 * [in] Index to free
+	 */
+	uint16_t idx;
+	/**
+	 * [out] reference count after free
+	 */
+	uint16_t ref_cnt;
+};
+
+/** free TCAM entry
+ *
+ * Free TCAM entry.
+ *
+ * Firmware checks to ensure the TCAM entries are owned by the TruFlow
+ * session.  The TCAM entry will be invalidated by writing an all-ones
+ * mask to the hardware.
+ *
+ * WCTCAM profile id of 0 must be used to invalidate an entry.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tcam_entry(struct tf *tfp,
+		       struct tf_free_tcam_entry_parms *parms);
+
+/**
+ * @page table Table Access
+ *
+ * @ref tf_alloc_tbl_entry
+ *
+ * @ref tf_free_tbl_entry
+ *
+ * @ref tf_set_tbl_entry
+ *
+ * @ref tf_get_tbl_entry
+ */
+
+/**
  * Enumeration of TruFlow table types. A table type is used to identify a
  * resource object.
  *
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index c44f96f..9d17440 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -106,6 +106,39 @@ struct tf_msg_dma_buf {
 	uint64_t pa_addr;
 };
 
+static int
+tf_tcam_tbl_2_hwrm(enum tf_tcam_tbl_type tcam_type,
+		   uint32_t *hwrm_type)
+{
+	int rc = 0;
+
+	switch (tcam_type) {
+	case TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:
+		*hwrm_type = TF_DEV_DATA_TYPE_TF_L2_CTX_ENTRY;
+		break;
+	case TF_TCAM_TBL_TYPE_PROF_TCAM:
+		*hwrm_type = TF_DEV_DATA_TYPE_TF_PROF_TCAM_ENTRY;
+		break;
+	case TF_TCAM_TBL_TYPE_WC_TCAM:
+		*hwrm_type = TF_DEV_DATA_TYPE_TF_WC_ENTRY;
+		break;
+	case TF_TCAM_TBL_TYPE_VEB_TCAM:
+		rc = -EOPNOTSUPP;
+		break;
+	case TF_TCAM_TBL_TYPE_SP_TCAM:
+		rc = -EOPNOTSUPP;
+		break;
+	case TF_TCAM_TBL_TYPE_CT_RULE_TCAM:
+		rc = -EOPNOTSUPP;
+		break;
+	default:
+		rc = -EOPNOTSUPP;
+		break;
+	}
+
+	return rc;
+}
+
 /**
  * Sends session open request to TF Firmware
  */
@@ -835,3 +868,129 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
+
+#define TF_BYTES_PER_SLICE(tfp) 12
+#define NUM_SLICES(tfp, bytes) \
+	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
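+/* Rounds up to whole slices, e.g. NUM_SLICES(tfp, 40) = (40 + 11) / 12 = 4 */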
+
+static int
+tf_msg_get_dma_buf(struct tf_msg_dma_buf *buf, int size)
+{
+	struct tfp_calloc_parms alloc_parms;
+	int rc;
+
+	/* Allocate session */
+	alloc_parms.nitems = 1;
+	alloc_parms.size = size;
+	alloc_parms.alignment = 0;
+	rc = tfp_calloc(&alloc_parms);
+	if (rc) {
+		/* Log error */
+		PMD_DRV_LOG(ERR,
+			    "Failed to allocate tcam dma entry, rc:%d\n",
+			    rc);
+		return -ENOMEM;
+	}
+
+	buf->pa_addr = (uintptr_t)alloc_parms.mem_pa;
+	buf->va_addr = alloc_parms.mem_va;
+
+	return 0;
+}
+
+int
+tf_msg_tcam_entry_set(struct tf *tfp,
+		      struct tf_set_tcam_entry_parms *parms)
+{
+	int rc;
+	struct tfp_send_msg_parms mparms = { 0 };
+	struct hwrm_tf_tcam_set_input req = { 0 };
+	struct hwrm_tf_tcam_set_output resp = { 0 };
+	uint16_t key_bytes =
+		TF_BITS2BYTES_WORD_ALIGN(parms->key_sz_in_bits);
+	uint16_t result_bytes =
+		TF_BITS2BYTES_WORD_ALIGN(parms->result_sz_in_bits);
+	struct tf_msg_dma_buf buf = { 0 };
+	uint8_t *data = NULL;
+	int data_size = 0;
+
+	rc = tf_tcam_tbl_2_hwrm(parms->tcam_tbl_type, &req.type);
+	if (rc != 0)
+		return rc;
+
+	req.idx = tfp_cpu_to_le_16(parms->idx);
+	if (parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DIR_TX;
+
+	req.key_size = key_bytes;
+	req.mask_offset = key_bytes;
+	/* Result follows after key and mask, thus multiply by 2 */
+	req.result_offset = 2 * key_bytes;
+	req.result_size = result_bytes;
+	data_size = 2 * req.key_size + req.result_size;
+
+	if (data_size <= TF_PCI_BUF_SIZE_MAX) {
+		/* use pci buffer */
+		data = &req.dev_data[0];
+	} else {
+		/* use dma buffer */
+		req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
+		rc = tf_msg_get_dma_buf(&buf, data_size);
+		if (rc != 0)
+			return rc;
+		data = buf.va_addr;
+		memcpy(&req.dev_data[0], &buf.pa_addr, sizeof(buf.pa_addr));
+	}
+
+	memcpy(&data[0], parms->key, key_bytes);
+	memcpy(&data[key_bytes], parms->mask, key_bytes);
+	memcpy(&data[req.result_offset], parms->result, result_bytes);
+
+	mparms.tf_type = HWRM_TF_TCAM_SET;
+	mparms.req_data = (uint32_t *)&req;
+	mparms.req_size = sizeof(req);
+	mparms.resp_data = (uint32_t *)&resp;
+	mparms.resp_size = sizeof(resp);
+	mparms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &mparms);
+	/* Free the DMA buffer (when used) regardless of the send result
+	 * so it is not leaked on failure.
+	 */
+	if (buf.va_addr != NULL)
+		tfp_free(buf.va_addr);
+
+	return rc;
+}
+
+int
+tf_msg_tcam_entry_free(struct tf *tfp,
+		       struct tf_free_tcam_entry_parms *in_parms)
+{
+	int rc;
+	struct hwrm_tf_tcam_free_input req =  { 0 };
+	struct hwrm_tf_tcam_free_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	/* Populate the request */
+	rc = tf_tcam_tbl_2_hwrm(in_parms->tcam_tbl_type, &req.type);
+	if (rc != 0)
+		return rc;
+
+	req.count = 1;
+	req.idx_list[0] = tfp_cpu_to_le_16(in_parms->idx);
+	if (in_parms->dir == TF_DIR_TX)
+		req.flags |= HWRM_TF_TCAM_FREE_INPUT_FLAGS_DIR_TX;
+
+	parms.tf_type = HWRM_TF_TCAM_FREE;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 057de84..fa74d78 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -120,4 +120,34 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   enum tf_dir dir,
 				   struct tf_rm_entry *sram_entry);
 
+/**
+ * Sends tcam entry 'set' to the Firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to set parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_tcam_entry_set(struct tf *tfp,
+			  struct tf_set_tcam_entry_parms *parms);
+
+/**
+ * Sends tcam entry 'free' to the Firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] parms
+ *   Pointer to free parameters
+ *
+ * Returns:
+ *  0 on Success else internal Truflow error
+ */
+int tf_msg_tcam_entry_free(struct tf *tfp,
+			   struct tf_free_tcam_entry_parms *parms);
+
 #endif  /* _TF_MSG_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 11/34] net/bnxt: add tf core table scope support
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (9 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 10/34] net/bnxt: add tf core TCAM support Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 12/34] net/bnxt: add EM/EEM functionality Venkat Duvvuru
                         ` (25 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Farah Smith, Michael Wildt

From: Farah Smith <farah.smith@broadcom.com>

- Added TruFlow Table public API
- Added Table Scope capability including Table Type support code for
  setting and getting Table Types.
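
A minimal usage sketch for allocating a table scope, assuming an open
session handle "tfp" (the sizes below are illustrative; either the
memory size or the flow count may be supplied, not both):

	struct tf_alloc_tbl_scope_parms parms = { 0 };

	parms.rx_max_key_sz_in_bits = 448;
	parms.rx_max_action_entry_sz_in_bits = 256;
	parms.rx_num_flows_in_k = 32;	/* 32K RX flows, mem size left 0 */
	parms.tx_max_key_sz_in_bits = 448;
	parms.tx_max_action_entry_sz_in_bits = 256;
	parms.tx_num_flows_in_k = 32;	/* 32K TX flows, mem size left 0 */

	if (tf_alloc_tbl_scope(tfp, &parms) == 0) {
		/* parms.tbl_scope_id identifies the new table scope */
	}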

Signed-off-by: Farah Smith <farah.smith@broadcom.com>
Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile          |   1 +
 drivers/net/bnxt/tf_core/hwrm_tf.h |  21 ++++++
 drivers/net/bnxt/tf_core/tf_core.c |   4 ++
 drivers/net/bnxt/tf_core/tf_core.h | 128 +++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.c  |  81 +++++++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_msg.h  |  63 ++++++++++++++++++
 drivers/net/bnxt/tf_core/tf_tbl.c  |  43 +++++++++++++
 7 files changed, 341 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 02f8c3f..6714a6a 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -52,6 +52,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
 #
diff --git a/drivers/net/bnxt/tf_core/hwrm_tf.h b/drivers/net/bnxt/tf_core/hwrm_tf.h
index a8a5547..acb9a8b 100644
--- a/drivers/net/bnxt/tf_core/hwrm_tf.h
+++ b/drivers/net/bnxt/tf_core/hwrm_tf.h
@@ -891,6 +891,27 @@ typedef struct tf_session_sram_resc_flush_input {
 } tf_session_sram_resc_flush_input_t, *ptf_session_sram_resc_flush_input_t;
 BUILD_BUG_ON(sizeof(tf_session_sram_resc_flush_input_t) <= TF_MAX_REQ_SIZE);
 
+/* Input params for table type set */
+typedef struct tf_tbl_type_set_input {
+	/* Session Id */
+	uint32_t			 fw_session_id;
+	/* flags */
+	uint16_t			 flags;
+	/* When set to 0, indicates the set applies to RX */
+#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_RX			(0x0)
+	/* When set to 1, indicates the set applies to TX */
+#define TF_TBL_TYPE_SET_INPUT_FLAGS_DIR_TX			(0x1)
+	/* Type of the object to set */
+	uint32_t			 type;
+	/* Size of the data to set in bytes */
+	uint16_t			 size;
+	/* Data to set */
+	uint8_t			  data[TF_BULK_SEND];
+	/* Index to set */
+	uint32_t			 index;
+} tf_tbl_type_set_input_t, *ptf_tbl_type_set_input_t;
+BUILD_BUG_ON(sizeof(tf_tbl_type_set_input_t) <= TF_MAX_REQ_SIZE);
+
 /* Input params for table type get */
 typedef struct tf_tbl_type_get_input {
 	/* Session Id */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 39f4a11..f04a9b1 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -7,6 +7,7 @@
 
 #include "tf_core.h"
 #include "tf_session.h"
+#include "tf_tbl.h"
 #include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
@@ -173,6 +174,9 @@ tf_open_session(struct tf                    *tfp,
 	/* Setup hash seeds */
 	tf_seeds_init(session);
 
+	/* Initialize external pool data structures */
+	tf_init_tbl_pool(session);
+
 	session->ref_count++;
 
 	/* Return session ID */
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 1431d06..4c90677 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -458,6 +458,134 @@ int tf_free_identifier(struct tf *tfp,
 		       struct tf_free_identifier_parms *parms);
 
 /**
+ * @page dram_table DRAM Table Scope Interface
+ *
+ * @ref tf_alloc_tbl_scope
+ *
+ * @ref tf_free_tbl_scope
+ *
+ * If we allocate the EEM memory from the core, we need to store it in
+ * the shared session data structure to make sure it can be freed later
+ * (for example, if the PF goes away).
+ *
+ * The current approach is that the memory is allocated within the core.
+ */
+
+
+/** tf_alloc_tbl_scope_parms definition
+ */
+struct tf_alloc_tbl_scope_parms {
+	/**
+	 * [in] Maximum key size required.
+	 */
+	uint16_t rx_max_key_sz_in_bits;
+	/**
+	 * [in] Maximum Action size required (includes inlined items)
+	 */
+	uint16_t rx_max_action_entry_sz_in_bits;
+	/**
+	 * [in] Memory size in Megabytes
+	 * Total memory size allocated by user to be divided
+	 * up for actions, hash, counters.  Only inline external actions.
+	 * Use this variable or the number of flows, do not set both.
+	 */
+	uint32_t rx_mem_size_in_mb;
+	/**
+	 * [in] Number of flows * 1000. If set, rx_mem_size_in_mb must equal 0.
+	 */
+	uint32_t rx_num_flows_in_k;
+	/**
+	 * [in] SR2 only receive table access interface id
+	 */
+	uint32_t rx_tbl_if_id;
+	/**
+	 * [in] Maximum key size required.
+	 */
+	uint16_t tx_max_key_sz_in_bits;
+	/**
+	 * [in] Maximum Action size required (includes inlined items)
+	 */
+	uint16_t tx_max_action_entry_sz_in_bits;
+	/**
+	 * [in] Memory size in Megabytes
+	 * Total memory size allocated by user to be divided
+	 * up for actions, hash, counters.  Only inline external actions.
+	 */
+	uint32_t tx_mem_size_in_mb;
+	/**
+	 * [in] Number of flows * 1000
+	 */
+	uint32_t tx_num_flows_in_k;
+	/**
+	 * [in] SR2 only transmit table access interface id
+	 */
+	uint32_t tx_tbl_if_id;
+	/**
+	 * [out] table scope identifier
+	 */
+	uint32_t tbl_scope_id;
+};
+
+struct tf_free_tbl_scope_parms {
+	/**
+	 * [in] table scope identifier
+	 */
+	uint32_t tbl_scope_id;
+};
+
+/**
+ * allocate a table scope
+ *
+ * On SR2, Firmware will allocate a scope ID.  On other devices, the scope
+ * is a software construct to identify an EEM table.  This function will
+ * divide the hash memory/buckets and records according to the device
+ * constraints based upon calculations using either the number of flows
+ * requested or the size of memory indicated.  Other parameters passed in
+ * determine the configuration (maximum key size, maximum external action
+ * record size).
+ *
+ * This API will allocate the table region in
+ * DRAM, program the PTU page table entries, and program the number of static
+ * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
+ * 0 in the EM memory for the scope.  Upon successful completion of this API,
+ * hash tables are fully initialized and ready for entries to be inserted.
+ *
+ * A single API is used to allocate a common table scope identifier in both
+ * receive and transmit CFA. The scope identifier is common due to the
+ * nature of connection tracking, which sends notifications between the
+ * RX and TX directions.
+ *
+ * The receive and transmit table access identifiers specify which rings will
+ * be used to initialize table DRAM.  The application must ensure mutual
+ * exclusivity of ring usage for table scope allocation and any table update
+ * operations.
+ *
+ * The hash table buckets, EM keys, and EM lookup results are stored in the
+ * memory allocated based on the rx_mem_size_in_mb/tx_mem_size_in_mb parameters.  The
+ * hash table buckets are stored at the beginning of that memory.
+ *
+ * NOTES:  No EM internal setup is done here. On chip EM records are managed
+ * internally by TruFlow core.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_tbl_scope(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms);
+
+
+/**
+ * free a table scope
+ *
+ * Firmware checks that the table scope ID is owned by the TruFlow
+ * session, verifies that no references to this table scope remain
+ * (SR2 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
+ * then frees the table scope ID.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tbl_scope(struct tf *tfp,
+		      struct tf_free_tbl_scope_parms *parms);
+
+/**
  * TCAM table type
  */
 enum tf_tcam_tbl_type {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 9d17440..3f3001c 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -869,6 +869,87 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+int
+tf_msg_set_tbl_entry(struct tf *tfp,
+		     enum tf_dir dir,
+		     enum tf_tbl_type type,
+		     uint16_t size,
+		     uint8_t *data,
+		     uint32_t index)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_set_input req = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(type);
+	req.size = tfp_cpu_to_le_16(size);
+	req.index = tfp_cpu_to_le_32(index);
+
+	tfp_memcpy(&req.data,
+		   data,
+		   size);
+
+	MSG_PREP_NO_RESP(parms,
+			 TF_KONG_MB,
+			 HWRM_TF,
+			 HWRM_TFT_TBL_TYPE_SET,
+			 req);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
+int
+tf_msg_get_tbl_entry(struct tf *tfp,
+		     enum tf_dir dir,
+		     enum tf_tbl_type type,
+		     uint16_t size,
+		     uint8_t *data,
+		     uint32_t index)
+{
+	int rc;
+	struct tfp_send_msg_parms parms = { 0 };
+	struct tf_tbl_type_get_input req = { 0 };
+	struct tf_tbl_type_get_output resp = { 0 };
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Populate the request */
+	req.fw_session_id =
+		tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);
+	req.flags = tfp_cpu_to_le_16(dir);
+	req.type = tfp_cpu_to_le_32(type);
+	req.index = tfp_cpu_to_le_32(index);
+
+	MSG_PREP(parms,
+		 TF_KONG_MB,
+		 HWRM_TF,
+		 HWRM_TFT_TBL_TYPE_GET,
+		 req,
+		 resp);
+
+	rc = tfp_send_msg_tunneled(tfp, &parms);
+	if (rc)
+		return rc;
+
+	/* Verify the firmware returned at least the requested data */
+	if (resp.size < size)
+		return -EINVAL;
+
+	tfp_memcpy(data,
+		   &resp.data,
+		   size);
+
+	return tfp_le_to_cpu_32(parms.tf_resp_code);
+}
+
 #define TF_BYTES_PER_SLICE(tfp) 12
 #define NUM_SLICES(tfp, bytes) \
 	(((bytes) + TF_BYTES_PER_SLICE(tfp) - 1) / TF_BYTES_PER_SLICE(tfp))
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index fa74d78..9055b16 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -6,6 +6,7 @@
 #ifndef _TF_MSG_H_
 #define _TF_MSG_H_
 
+#include "tf_tbl.h"
 #include "tf_rm.h"
 
 struct tf;
@@ -150,4 +151,66 @@ int tf_msg_tcam_entry_set(struct tf *tfp,
 int tf_msg_tcam_entry_free(struct tf *tfp,
 			   struct tf_free_tcam_entry_parms *parms);
 
+/**
+ * Sends Set message of a Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] dir
+ *   Direction location of the element to set
+ *
+ * [in] type
+ *   Type of the object to set
+ *
+ * [in] size
+ *   Size of the data to set
+ *
+ * [in] data
+ *   Data to set
+ *
+ * [in] index
+ *   Index to set
+ *
+ * Returns:
+ *   0 - Success
+ */
+int tf_msg_set_tbl_entry(struct tf *tfp,
+			 enum tf_dir dir,
+			 enum tf_tbl_type type,
+			 uint16_t size,
+			 uint8_t *data,
+			 uint32_t index);
+
+/**
+ * Sends get message of a Table Type element to the firmware.
+ *
+ * [in] tfp
+ *   Pointer to session handle
+ *
+ * [in] dir
+ *   Direction location of the element to get
+ *
+ * [in] type
+ *   Type of the object to get
+ *
+ * [in] size
+ *   Size of the data to read
+ *
+ * [out] data
+ *   Buffer the data is read into
+ *
+ * [in] index
+ *   Index to get
+ *
+ * Returns:
+ *   0 - Success
+ */
+int tf_msg_get_tbl_entry(struct tf *tfp,
+			 enum tf_dir dir,
+			 enum tf_tbl_type type,
+			 uint16_t size,
+			 uint8_t *data,
+			 uint32_t index);
+
 #endif  /* _TF_MSG_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
new file mode 100644
index 0000000..14bf4ef
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* Truflow Table APIs and supporting code */
+
+#include <stdio.h>
+#include <string.h>
+#include <stdbool.h>
+#include <math.h>
+#include <sys/param.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include "hsi_struct_def_dpdk.h"
+
+#include "tf_core.h"
+#include "tf_session.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "hwrm_tf.h"
+#include "bnxt.h"
+#include "tf_resources.h"
+#include "tf_rm.h"
+
+#define PTU_PTE_VALID          0x1UL
+#define PTU_PTE_LAST           0x2UL
+#define PTU_PTE_NEXT_TO_LAST   0x4UL
+
+/* Number of pointers per page_size */
+#define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
+
+/* API defined in tf_tbl.h */
+void
+tf_init_tbl_pool(struct tf_session *session)
+{
+	enum tf_dir dir;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		session->ext_pool_2_scope[dir][TF_EXT_POOL_0] =
+			TF_TBL_SCOPE_INVALID;
+	}
+}
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

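An illustrative sketch of the table scope API above, assuming an open
session handle tfp; the field names come from tf_alloc_tbl_scope_parms,
the numeric values are made up, and per the API the memory size and flow
count must not both be set:

	struct tf_alloc_tbl_scope_parms aparms = { 0 };
	struct tf_free_tbl_scope_parms fparms = { 0 };
	int rc;

	/* Size by flow count; *_mem_size_in_mb must then stay 0 */
	aparms.rx_max_key_sz_in_bits = 448;
	aparms.rx_max_action_entry_sz_in_bits = 4096;
	aparms.rx_num_flows_in_k = 32;		/* 32K flows */
	aparms.tx_max_key_sz_in_bits = 448;
	aparms.tx_max_action_entry_sz_in_bits = 4096;
	aparms.tx_num_flows_in_k = 32;

	rc = tf_alloc_tbl_scope(tfp, &aparms);
	if (rc)
		return rc;

	/* aparms.tbl_scope_id now identifies the EEM table scope */

	fparms.tbl_scope_id = aparms.tbl_scope_id;
	rc = tf_free_tbl_scope(tfp, &fparms);
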
* [dpdk-dev] [PATCH v4 12/34] net/bnxt: add EM/EEM functionality
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (10 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 11/34] net/bnxt: add tf core table scope support Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 13/34] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
                         ` (24 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Pete Spreadborough

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- Add TruFlow flow memory support
- Exact Match (EM) adds the capability to manage and manipulate
  data flows using on chip memory.
- Extended Exact Match (EEM) behaves similarly to EM, but at a
  vastly increased scale by using host DDR, with performance
  tradeoff due to the need to access off-chip memory.

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |    2 +
 drivers/net/bnxt/tf_core/lookup3.h            |  162 +++
 drivers/net/bnxt/tf_core/stack.c              |  107 ++
 drivers/net/bnxt/tf_core/stack.h              |  107 ++
 drivers/net/bnxt/tf_core/tf_core.c            |   50 +
 drivers/net/bnxt/tf_core/tf_core.h            |  480 ++++++-
 drivers/net/bnxt/tf_core/tf_em.c              |  515 +++++++
 drivers/net/bnxt/tf_core/tf_em.h              |  117 ++
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |  166 +++
 drivers/net/bnxt/tf_core/tf_msg.c             |  171 +++
 drivers/net/bnxt/tf_core/tf_msg.h             |   40 +
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1795 ++++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   83 ++
 13 files changed, 3788 insertions(+), 7 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/lookup3.h
 create mode 100644 drivers/net/bnxt/tf_core/stack.c
 create mode 100644 drivers/net/bnxt/tf_core/stack.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 6714a6a..4c95847 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -51,6 +51,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/rand.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
diff --git a/drivers/net/bnxt/tf_core/lookup3.h b/drivers/net/bnxt/tf_core/lookup3.h
new file mode 100644
index 0000000..e5abcc2
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/lookup3.h
@@ -0,0 +1,162 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Based on lookup3.c, by Bob Jenkins, May 2006, Public Domain.
+ * http://www.burtleburtle.net/bob/c/lookup3.c
+ *
+ * These functions produce 32-bit hashes for hash table lookup.
+ * hashword(), hashlittle(), hashlittle2(), hashbig(), mix(), and final()
+ * are externally useful functions. Routines to test the hash are included
+ * if SELF_TEST is defined. You can use this free for any purpose. It is in
+ * the public domain. It has no warranty.
+ */
+
+#ifndef _LOOKUP3_H_
+#define _LOOKUP3_H_
+
+#define rot(x, k) (((x) << (k)) | ((x) >> (32 - (k))))
+
+/** -------------------------------------------------------------------------
+ * This is reversible, so any information in (a,b,c) before mix() is
+ * still in (a,b,c) after mix().
+ *
+ * If four pairs of (a,b,c) inputs are run through mix(), or through
+ * mix() in reverse, there are at least 32 bits of the output that
+ * are sometimes the same for one pair and different for another pair.
+ * This was tested for:
+ *   pairs that differed by one bit, by two bits, in any combination
+ *   of top bits of (a,b,c), or in any combination of bottom bits of
+ *   (a,b,c).
+ *   "differ" is defined as +, -, ^, or ~^.  For + and -, I transformed
+ *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ *   is commonly produced by subtraction) look like a single 1-bit
+ *   difference.
+ *   the base values were pseudorandom, all zero but one bit set, or
+ *   all zero plus a counter that starts at zero.
+ *
+ * Some k values for my "a-=c; a^=rot(c,k); c+=b;" arrangement that
+ * satisfy this are
+ *     4  6  8 16 19  4
+ *     9 15  3 18 27 15
+ *    14  9  3  7 17  3
+ * Well, "9 15 3 18 27 15" didn't quite get 32 bits diffing
+ * for "differ" defined as + with a one-bit base and a two-bit delta.  I
+ * used http://burtleburtle.net/bob/hash/avalanche.html to choose
+ * the operations, constants, and arrangements of the variables.
+ *
+ * This does not achieve avalanche.  There are input bits of (a,b,c)
+ * that fail to affect some output bits of (a,b,c), especially of a.  The
+ * most thoroughly mixed value is c, but it doesn't really even achieve
+ * avalanche in c.
+ *
+ * This allows some parallelism.  Read-after-writes are good at doubling
+ * the number of bits affected, so the goal of mixing pulls in the opposite
+ * direction as the goal of parallelism.  I did what I could.  Rotates
+ * seem to cost as much as shifts on every machine I could lay my hands
+ * on, and rotates are much kinder to the top and bottom bits, so I used
+ * rotates.
+ * --------------------------------------------------------------------------
+ */
+#define mix(a, b, c) \
+{ \
+	(a) -= (c); (a) ^= rot((c), 4);  (c) += b; \
+	(b) -= (a); (b) ^= rot((a), 6);  (a) += c; \
+	(c) -= (b); (c) ^= rot((b), 8);  (b) += a; \
+	(a) -= (c); (a) ^= rot((c), 16); (c) += b; \
+	(b) -= (a); (b) ^= rot((a), 19); (a) += c; \
+	(c) -= (b); (c) ^= rot((b), 4);  (b) += a; \
+}
+
+/** --------------------------------------------------------------------------
+ * final -- final mixing of 3 32-bit values (a,b,c) into c
+ *
+ * Pairs of (a,b,c) values differing in only a few bits will usually
+ * produce values of c that look totally different.  This was tested for
+ *  pairs that differed by one bit, by two bits, in any combination
+ *   of top bits of (a,b,c), or in any combination of bottom bits of
+ *   (a,b,c).
+ *   "differ" is defined as +, -, ^, or ~^.  For + and -, I transformed
+ *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ *   is commonly produced by subtraction) look like a single 1-bit
+ *   difference.
+ *   the base values were pseudorandom, all zero but one bit set, or
+ *   all zero plus a counter that starts at zero.
+ *
+ * These constants passed:
+ *  14 11 25 16 4 14 24
+ *  12 14 25 16 4 14 24
+ * and these came close:
+ *   4  8 15 26 3 22 24
+ *  10  8 15 26 3 22 24
+ *  11  8 15 26 3 22 24
+ * --------------------------------------------------------------------------
+ */
+#define final(a, b, c) \
+{ \
+	(c) ^= (b); (c) -= rot((b), 14); \
+	(a) ^= (c); (a) -= rot((c), 11); \
+	(b) ^= (a); (b) -= rot((a), 25); \
+	(c) ^= (b); (c) -= rot((b), 16); \
+	(a) ^= (c); (a) -= rot((c), 4);  \
+	(b) ^= (a); (b) -= rot((a), 14); \
+	(c) ^= (b); (c) -= rot((b), 24); \
+}
+
+/** --------------------------------------------------------------------
+ *  This works on all machines.  To be useful, it requires
+ *  -- that the key be an array of uint32_t's, and
+ *  -- that the length be the number of uint32_t's in the key
+ *
+ *  The function hashword() is identical to hashlittle() on little-endian
+ *  machines, and identical to hashbig() on big-endian machines,
+ *  except that the length has to be measured in uint32_ts rather than in
+ *  bytes. hashlittle() is more complicated than hashword() only because
+ *  hashlittle() has to dance around fitting the key bytes into registers.
+ *
+ *  Input Parameters:
+ *	 key: an array of uint32_t values
+ *	 length: the length of the key, in uint32_ts
+ *	 initval: the previous hash, or an arbitrary value
+ * --------------------------------------------------------------------
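+ *
+ *	 Note: as used here the key is walked from the end (the index
+ *	 starts at 12), so the function expects a key of exactly 13
+ *	 uint32_t words (52 bytes), matching TF_HW_EM_KEY_MAX_SIZE.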
+ */
+static inline uint32_t hashword(const uint32_t *k,
+				size_t length,
+				uint32_t initval) {
+	uint32_t a, b, c;
+	int index = 12;
+
+	/* Set up the internal state */
+	a = 0xdeadbeef + (((uint32_t)length) << 2) + initval;
+	b = a;
+	c = a;
+
+	/*-------------------------------------------- handle most of the key */
+	while (length > 3) {
+		a += k[index];
+		b += k[index - 1];
+		c += k[index - 2];
+		mix(a, b, c);
+		length -= 3;
+		index -= 3;
+	}
+
+	/*-------------------------------------- handle the last 3 uint32_t's */
+	switch (length) {	      /* all the case statements fall through */
+	case 3:
+		c += k[index - 2];
+		/* Falls through. */
+	case 2:
+		b += k[index - 1];
+		/* Falls through. */
+	case 1:
+		a += k[index];
+		final(a, b, c);
+		/* Falls through. */
+	case 0:	    /* case 0: nothing left to add */
+		/* FALLTHROUGH */
+		break;
+	}
+	/*------------------------------------------------- report the result */
+	return c;
+}
+
+#endif /* _LOOKUP3_H_ */
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
new file mode 100644
index 0000000..3337073
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <errno.h>
+#include "stack.h"
+
+#define STACK_EMPTY -1
+
+/* Initialize stack
+ */
+int
+stack_init(int num_entries, uint32_t *items, struct stack *st)
+{
+	if (items == NULL || st == NULL)
+		return -EINVAL;
+
+	st->max = num_entries;
+	st->top = STACK_EMPTY;
+	st->items = items;
+
+	return 0;
+}
+
+/* Return the size of the stack
+ */
+int32_t
+stack_size(struct stack *st)
+{
+	return st->top + 1;
+}
+
+/* Check if the stack is empty
+ */
+bool
+stack_is_empty(struct stack *st)
+{
+	return st->top == STACK_EMPTY;
+}
+
+/* Check if the stack is full
+ */
+bool
+stack_is_full(struct stack *st)
+{
+	return st->top == st->max - 1;
+}
+
+/* Add element x to the stack
+ */
+int
+stack_push(struct stack *st, uint32_t x)
+{
+	if (stack_is_full(st))
+		return -EOVERFLOW;
+
+	/* add an element and increment the top index
+	 */
+	st->items[++st->top] = x;
+
+	return 0;
+}
+
+/* Pop top element x from the stack and return
+ * in user provided location.
+ */
+int
+stack_pop(struct stack *st, uint32_t *x)
+{
+	if (stack_is_empty(st))
+		return -ENODATA;
+
+	*x = st->items[st->top];
+	st->top--;
+
+	return 0;
+}
+
+/* Dump the stack
+ */
+void stack_dump(struct stack *st)
+{
+	int i, j;
+
+	printf("top=%d\n", st->top);
+	printf("max=%d\n", st->max);
+
+	if (st->top == -1) {
+		printf("stack is empty\n");
+		return;
+	}
+
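+	/* Dump the backing array, eight items per row */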
+	for (i = 0; i < st->max; i++) {
+		printf("item[%d] 0x%08x", i, st->items[i]);
+
+		for (j = 0; j < 7; j++) {
+			if (i++ < st->max - 1)
+				printf(" 0x%08x", st->items[i]);
+		}
+		printf("\n");
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/stack.h b/drivers/net/bnxt/tf_core/stack.h
new file mode 100644
index 0000000..6fe8829
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/stack.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+#ifndef _STACK_H_
+#define _STACK_H_
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdint.h>
+
+/** Stack data structure
+ */
+struct stack {
+	int max;         /**< Maximum number of entries */
+	int top;         /**< Index of the top element, -1 if empty */
+	uint32_t *items; /**< items in the stack */
+};
+
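+/* Typical usage (illustrative):
+ *
+ *	uint32_t items[32];
+ *	uint32_t val;
+ *	struct stack st;
+ *
+ *	stack_init(32, items, &st);
+ *	stack_push(&st, 7);
+ *	stack_pop(&st, &val);
+ */
+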
+/** Initialize stack of uint32_t elements
+ *
+ *  [in] num_entries
+ *    maximum number of elements in the stack
+ *
+ *  [in] items
+ *    pointer to items (must be sized to uint32_t * num_entries)
+ *
+ *  [in] st
+ *    pointer to the stack structure
+ *
+ *  return
+ *    0 for success
+ */
+int stack_init(int num_entries,
+	       uint32_t *items,
+	       struct stack *st);
+
+/** Return the size of the stack
+ *
+ *  [in] st
+ *    pointer to the stack
+ *
+ *  return
+ *    number of elements
+ */
+int32_t stack_size(struct stack *st);
+
+/** Check if the stack is empty
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *   true or false
+ */
+bool stack_is_empty(struct stack *st);
+
+/** Check if the stack is full
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *   true or false
+ */
+bool stack_is_full(struct stack *st);
+
+/** Add element x to the stack
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * [in] x
+ *   value to push on the stack
+ *
+ * return
+ *  0 for success
+ */
+int stack_push(struct stack *st, uint32_t x);
+
+/** Pop top element x from the stack and return
+ * in user provided location.
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * [in, out] x
+ *  pointer to where the value popped will be written
+ *
+ * return
+ *  0 for success
+ */
+int stack_pop(struct stack *st, uint32_t *x);
+
+/** Dump stack information
+ *
+ * Warning: Don't use for large stacks due to prints
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *    none
+ */
+void stack_dump(struct stack *st);
+
+#endif /* _STACK_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index f04a9b1..fc7d638 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -8,6 +8,7 @@
 #include "tf_core.h"
 #include "tf_session.h"
 #include "tf_tbl.h"
+#include "tf_em.h"
 #include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
@@ -289,6 +290,55 @@ tf_close_session(struct tf *tfp)
 	return rc_close;
 }
 
+/** insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_insert_em_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb     *tbl_scope_cb;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb =
+		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
+				  parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	/* Process the EM entry per Table Scope type */
+	return tf_insert_eem_entry((struct tf_session *)tfp->session->core_data,
+				   tbl_scope_cb,
+				   parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb     *tbl_scope_cb;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb =
+		tbl_scope_cb_find((struct tf_session *)tfp->session->core_data,
+				  parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	return tf_delete_eem_entry(tfp, parms);
+}
+
 /** allocate identifier resource
  *
  * Returns success or failure code.
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 4c90677..34e643c 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -21,6 +21,10 @@
 
 /********** BEGIN Truflow Core DEFINITIONS **********/
 
+
+#define TF_KILOBYTE  1024
+#define TF_MEGABYTE  (1024 * 1024)
+
 /**
  * direction
  */
@@ -31,6 +35,27 @@ enum tf_dir {
 };
 
 /**
+ * memory choice
+ */
+enum tf_mem {
+	TF_MEM_INTERNAL, /**< Internal */
+	TF_MEM_EXTERNAL, /**< External */
+	TF_MEM_MAX
+};
+
+/**
+ * The size of the external action record (Wh+/Brd2)
+ *
+ * Currently set to 512.
+ *
+ * AR (16B) + encap (256B) + stats_ptrs (8) + resvd (8)
+ * + stats (16) = 304 aligned on a 16B boundary
+ *
+ * Theoretically, the size should be smaller. ~304B
+ */
+#define TF_ACTION_RECORD_SZ 512
+
+/**
  * External pool size
  *
  * Defines a single pool of external action records of
@@ -56,6 +81,23 @@ enum tf_dir {
 #define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
 #define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
 
+/** EEM record AR helper
+ *
+ * Helpers to handle the Action Record Pointer in the EEM Record Entry.
+ *
+ * Convert absolute offset to action record pointer in EEM record entry
+ * Convert action record pointer in EEM record entry to absolute offset
+ */
+#define TF_ACT_REC_OFFSET_2_PTR(offset) ((offset) >> 4)
+#define TF_ACT_REC_PTR_2_OFFSET(offset) ((offset) << 4)
+
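+/* With TF_ACTION_RECORD_SZ of 512, an action record index selects a
+ * 512B slot, e.g. TF_ACT_REC_INDEX_2_OFFSET(1) == 512.
+ */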
+#define TF_ACT_REC_INDEX_2_OFFSET(idx) ((idx) << 9)
+
+/*
+ * Helper Macros
+ */
+#define TF_BITS_2_BYTES(num_bits) (((num_bits) + 7) / 8)
+
 /********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
 
 /**
@@ -495,7 +537,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t rx_num_flows_in_k;
 	/**
-	 * [in] SR2 only receive table access interface id
+	 * [in] Brd4 only receive table access interface id
 	 */
 	uint32_t rx_tbl_if_id;
 	/**
@@ -517,7 +559,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t tx_num_flows_in_k;
 	/**
-	 * [in] SR2 only transmit table access interface id
+	 * [in] Brd4 only transmit table access interface id
 	 */
 	uint32_t tx_tbl_if_id;
 	/**
@@ -536,7 +578,7 @@ struct tf_free_tbl_scope_parms {
 /**
  * allocate a table scope
  *
- * On SR2, Firmware will allocate a scope ID.  On other devices, the scope
+ * On Brd4, Firmware will allocate a scope ID.  On other devices, the scope
  * is a software construct to identify an EEM table.  This function will
  * divide the hash memory/buckets and records according to the device
  * constraints based upon calculations using either the number of flows
@@ -546,7 +588,7 @@ struct tf_free_tbl_scope_parms {
  *
  * This API will allocate the table region in
  * DRAM, program the PTU page table entries, and program the number of static
- * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
+ * buckets (if Brd4) in the RX and TX CFAs.  Buckets are assumed to start at
  * 0 in the EM memory for the scope.  Upon successful completion of this API,
  * hash tables are fully initialized and ready for entries to be inserted.
  *
@@ -563,7 +605,7 @@ struct tf_free_tbl_scope_parms {
  * memory allocated based on the rx_mem_size_in_mb/tx_mem_size_in_mb parameters.  The
  * hash table buckets are stored at the beginning of that memory.
  *
- * NOTES:  No EM internal setup is done here. On chip EM records are managed
+ * NOTE:  No EM internal setup is done here. On chip EM records are managed
  * internally by TruFlow core.
  *
  * Returns success or failure code.
@@ -577,7 +619,7 @@ int tf_alloc_tbl_scope(struct tf *tfp,
  *
  * Firmware checks that the table scope ID is owned by the TruFlow
  * session, verifies that no references to this table scope remain
- * (SR2 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
+ * (Brd4 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
  * then frees the table scope ID.
  *
  * Returns success or failure code.
@@ -905,4 +947,430 @@ enum tf_tbl_type {
 	TF_TBL_TYPE_EXT_0,
 	TF_TBL_TYPE_MAX
 };
+
+/** tf_alloc_tbl_entry parameter definition
+ */
+struct tf_alloc_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/** allocate index table entries
+ *
+ * Internal types:
+ *
+ * Allocate an on chip index table entry or search for a matching
+ * entry of the indicated type for this TruFlow session.
+ *
+ * Allocates an index table record. This function will attempt to
+ * allocate an entry or search an index table for a matching entry if
+ * search is enabled (only the shadow copy of the table is accessed).
+ *
+ * If search is not enabled, the first available free entry is
+ * returned. If search is enabled and a matching entry to entry_data
+ * is found hit is set to TRUE and success is returned.
+ *
+ * External types:
+ *
+ * These are used to allocate inlined action record memory.
+ *
+ * Allocates an external index table action record.
+ *
+ * NOTE:
+ * Implementation of the internals of this function will be a stack with push
+ * and pop.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_tbl_entry(struct tf *tfp,
+		       struct tf_alloc_tbl_entry_parms *parms);
+
+/** tf_free_tbl_entry parameter definition
+ */
+struct tf_free_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation type
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+/** free index table entry
+ *
+ * Used to free a previously allocated table entry.
+ *
+ * Internal types:
+ *
+ * If session has shadow_copy enabled the shadow DB is searched and if
+ * found the element ref_cnt is decremented. If ref_cnt goes to
+ * zero then the element is returned to the session pool.
+ *
+ * If the session does not have a shadow DB the element is freed and
+ * given back to the session pool.
+ *
+ * External types:
+ *
+ * Frees an external index table action record.
+ *
+ * NOTE:
+ * Implementation of the internals of this function will be a stack with push
+ * and pop.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tbl_entry(struct tf *tfp,
+		      struct tf_free_tbl_entry_parms *parms);
+
+/** tf_set_tbl_entry parameter definition
+ */
+struct tf_set_tbl_entry_parms {
+	/**
+	 * [in] Table scope identifier
+	 *
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/** set index table entry
+ *
+ * Used to insert an application programmed index table entry into a
+ * previously allocated table location.  A shadow copy of the table
+ * is maintained (if enabled, and only for internal objects).
+ *
+ * Returns success or failure code.
+ */
+int tf_set_tbl_entry(struct tf *tfp,
+		     struct tf_set_tbl_entry_parms *parms);
+
+/** tf_get_tbl_entry parameter definition
+ */
+struct tf_get_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/** get index table entry
+ *
+ * Used to retrieve a previously set index table entry.
+ *
+ * Reads and compares with the shadow table copy (if enabled) (only
+ * for internal objects).
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_tbl_entry(struct tf *tfp,
+		     struct tf_get_tbl_entry_parms *parms);
+
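+/* Typical index table flow (illustrative): allocate an entry with
+ * tf_alloc_tbl_entry(), program it with tf_set_tbl_entry() at the
+ * returned idx, optionally read it back with tf_get_tbl_entry(), and
+ * release it with tf_free_tbl_entry().
+ */
+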
+/**
+ * @page exact_match Exact Match Table
+ *
+ * @ref tf_insert_em_entry
+ *
+ * @ref tf_delete_em_entry
+ *
+ * @ref tf_search_em_entry
+ *
+ */
+/** tf_insert_em_entry parameter definition
+ */
+struct tf_insert_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] ptr to structure containing key fields
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key bit length
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] ptr to structure containing result field
+	 */
+	uint8_t *em_record;
+	/**
+	 * [out] result size in bits
+	 */
+	uint16_t em_record_sz_in_bits;
+	/**
+	 * [in] duplicate check flag
+	 */
+	uint8_t	dup_check;
+	/**
+	 * [out] Flow handle value for the inserted entry.  This is encoded
+	 * as the entries[4]:bucket[2]:hashId[1]:hash[14]
+	 */
+	uint64_t flow_handle;
+	/**
+	 * [out] Flow id is returned as null (internal)
+	 * Flow id is the GFID value for the inserted entry (external)
+	 * This is the value written to the BD and useful information for mark.
+	 */
+	uint64_t flow_id;
+};
+/**
+ * tf_delete_em_entry parameter definition
+ */
+struct tf_delete_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] epoch group IDs of entry to delete
+	 * 2 element array with 2 ids. (Brd4 only)
+	 */
+	uint16_t *epochs;
+	/**
+	 * [in] structure containing flow delete handle information
+	 */
+	uint64_t flow_handle;
+};
+/**
+ * tf_search_em_entry parameter definition
+ */
+struct tf_search_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] ptr to structure containing key fields
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key bit length
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in/out] ptr to structure containing EM record fields
+	 */
+	uint8_t *em_record;
+	/**
+	 * [out] result size in bits
+	 */
+	uint16_t em_record_sz_in_bits;
+	/**
+	 * [in] epoch group IDs of entry to lookup
+	 * 2 element array with 2 ids. (Brd4 only)
+	 */
+	uint16_t *epochs;
+	/**
+	 * [in] ptr to structure containing flow delete handle
+	 */
+	uint64_t flow_handle;
+};
+
+/** insert em hash entry in internal table memory
+ *
+ * Internal:
+ *
+ * This API inserts an exact match entry into internal EM table memory
+ * of the specified direction.
+ *
+ * Note: The EM record is managed within the TruFlow core and not the
+ * application.
+ *
+ * Shadow copy of internal record table an association with hash and 1,2, or 4
+ * associated buckets
+ *
+ * External:
+ * This API inserts an exact match entry into DRAM EM table memory of the
+ * specified direction and table scope.
+ *
+ * When inserting an entry into an exact match table, the TruFlow library may
+ * need to allocate a dynamic bucket for the entry (Brd4 only).
+ *
+ * The insertion of duplicate entries in an EM table is not permitted.	If a
+ * TruFlow application can guarantee that it will never insert duplicates, it
+ * can disable duplicate checking by passing a zero value in the dup_check
+ * parameter to this API.  This will optimize performance. Otherwise, the
+ * TruFlow library will enforce protection against inserting duplicate entries.
+ *
+ * Flow handle is defined in this document:
+ *
+ * https://docs.google.com
+ * /document/d/1NESu7RpTN3jwxbokaPfYORQyChYRmJgs40wMIRe8_-Q/edit
+ *
+ * Returns success or busy code.
+ *
+ */
+int tf_insert_em_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms);
+
+/** delete em hash entry table memory
+ *
+ * Internal:
+ *
+ * This API deletes an exact match entry from internal EM table memory of the
+ * specified direction. If a valid flow ptr is passed in then that takes
+ * precedence over the pointer to the complete key passed in.
+ *
+ *
+ * External:
+ *
+ * This API deletes an exact match entry from EM table memory of the specified
+ * direction and table scope. If a valid flow handle is passed in then that
+ * takes precedence over the pointer to the complete key passed in.
+ *
+ * The TruFlow library may release a dynamic bucket when an entry is deleted.
+ *
+ *
+ * Returns success or not found code
+ *
+ *
+ */
+int tf_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms);
+
+/** search em hash entry table memory
+ *
+ * Internal:
+ *
+ * This API looks up an EM entry in table memory with the specified EM
+ * key or flow (flow takes precedence) and direction.
+ *
+ * The status will be one of: success or entry not found.  If the lookup
+ * succeeds, a pointer to the matching entry and the result record associated
+ * with the matching entry will be provided.
+ *
+ * If flow_handle is set, search shadow copy.
+ *
+ * Otherwise, query the fw with key to get result.
+ *
+ * External:
+ *
+ * This API looks up an EM entry in table memory with the specified EM
+ * key or flow_handle (flow takes precedence), direction and table scope.
+ *
+ * The status will be one of: success or entry not found.  If the lookup
+ * succeeds, a pointer to the matching entry and the result record associated
+ * with the matching entry will be provided.
+ *
+ * Returns success or not found code
+ *
+ */
+int tf_search_em_entry(struct tf *tfp,
+		       struct tf_search_em_entry_parms *parms);
 #endif /* _TF_CORE_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
new file mode 100644
index 0000000..bd8e2ba
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -0,0 +1,515 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_em.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+/* Enable EEM table dump
+ */
+#define TF_EEM_DUMP
+
+static struct tf_eem_64b_entry zero_key_entry;
+
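+/* Return the key hash mask for a table of num_entries entries.  The
+ * size must be a multiple of 32K entries and no larger than 128M
+ * entries; the resulting mask is only meaningful when the entry count
+ * is a power of 2.  Returns 0 for unsupported sizes.
+ */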
+static uint32_t tf_em_get_key_mask(int num_entries)
+{
+	uint32_t mask = num_entries - 1;
+
+	if (num_entries & 0x7FFF)
+		return 0;
+
+	if (num_entries > (128 * 1024 * 1024))
+		return 0;
+
+	return mask;
+}
+
+/* CRC32i support for Key0 hash */
+#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
+#define crc32(x, y) crc32i(~0, x, y)
+
+static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
+0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
+0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
+0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
+0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
+0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
+0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
+0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
+0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
+0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
+0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
+0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
+0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
+0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
+0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
+0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
+0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
+0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
+0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
+0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
+0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
+0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
+0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
+0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
+0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
+0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
+0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
+0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
+0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
+0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
+0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
+0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
+0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
+0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
+0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
+0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
+0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
+0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
+0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
+0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
+0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
+0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
+0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
+0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
+0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
+0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
+0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
+0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
+0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
+0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
+0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
+0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
+0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
+0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
+0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
+0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
+0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
+0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
+0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
+0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
+0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
+0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
+0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
+0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
+0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
+};
+
+static uint32_t crc32i(uint32_t crc, const uint8_t *buf, size_t len)
+{
+	int l;
+
+	for (l = (len - 1); l >= 0; l--)
+		crc = ucrc32(buf[l], crc);
+
+	return ~crc;
+}
+
+static uint32_t tf_em_lkup_get_crc32_hash(struct tf_session *session,
+					  uint8_t *key,
+					  enum tf_dir dir)
+{
+	int i;
+	uint32_t index;
+	uint32_t val1, val2;
+	uint8_t temp[4];
+	uint8_t *kptr = key;
+
+	/* Do byte-wise XOR of the 52-byte HASH key first. */
+	index = *key;
+	kptr--;
+
+	for (i = TF_HW_EM_KEY_MAX_SIZE - 2; i >= 0; i--) {
+		index = index ^ *kptr;
+		kptr--;
+	}
+
+	/* Get seeds */
+	val1 = session->lkup_em_seed_mem[dir][index * 2];
+	val2 = session->lkup_em_seed_mem[dir][index * 2 + 1];
+
+	temp[3] = (uint8_t)(val1 >> 24);
+	temp[2] = (uint8_t)(val1 >> 16);
+	temp[1] = (uint8_t)(val1 >> 8);
+	temp[0] = (uint8_t)(val1 & 0xff);
+	val1 = 0;
+
+	/* Start with seed */
+	if (!(val2 & 0x1))
+		val1 = crc32i(~val1, temp, 4);
+
+	val1 = crc32i(~val1,
+		      (key - (TF_HW_EM_KEY_MAX_SIZE - 1)),
+		      TF_HW_EM_KEY_MAX_SIZE);
+
+	/* End with seed */
+	if (val2 & 0x1)
+		val1 = crc32i(~val1, temp, 4);
+
+	return val1;
+}
+
+static uint32_t tf_em_lkup_get_lookup3_hash(uint32_t lookup3_init_value,
+					    uint8_t *in_key)
+{
+	uint32_t val1;
+
+	val1 = hashword(((uint32_t *)in_key) + 1,
+			 TF_HW_EM_KEY_MAX_SIZE / (sizeof(uint32_t)),
+			 lookup3_init_value);
+
+	return val1;
+}
+
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum tf_em_table_type table_type)
+{
+	int level = 0;
+	int page = offset / TF_EM_PAGE_SIZE;
+	void *addr = NULL;
+	struct tf_em_ctx_mem_info *ctx = &tbl_scope_cb->em_ctx_info[dir];
+
+	if (ctx == NULL)
+		return NULL;
+
+	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
+		return NULL;
+
+	if (table_type < KEY0_TABLE || table_type > EFC_TABLE)
+		return NULL;
+
+	/*
+	 * Use the last level (num_lvl - 1) of the page table
+	 */
+	level = ctx->em_tables[table_type].num_lvl - 1;
+
+	addr = (void *)ctx->em_tables[table_type].pg_tbl[level].pg_va_tbl[page];
+
+	return addr;
+}
+
+/** Read Key table entry
+ *
+ * Entry is read in to entry
+ */
+static int tf_em_read_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+				 struct tf_eem_64b_entry *entry,
+				 uint32_t entry_size,
+				 uint32_t index,
+				 enum tf_em_table_type table_type,
+				 enum tf_dir dir)
+{
+	void *page;
+	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
+
+	page = tf_em_get_table_page(tbl_scope_cb,
+				    dir,
+				    (index * entry_size),
+				    table_type);
+
+	if (page == NULL)
+		return -EINVAL;
+
+	memcpy((uint8_t *)entry, (uint8_t *)page + entry_offset, entry_size);
+	return 0;
+}
+
+static int tf_em_write_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+				 struct tf_eem_64b_entry *entry,
+				 uint32_t entry_size,
+				 uint32_t index,
+				 enum tf_em_table_type table_type,
+				 enum tf_dir dir)
+{
+	void *page;
+	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
+
+	page = tf_em_get_table_page(tbl_scope_cb,
+				    dir,
+				    (index * entry_size),
+				    table_type);
+
+	if (page == NULL)
+		return -EINVAL;
+
+	memcpy((uint8_t *)page + entry_offset, entry, entry_size);
+
+	return 0;
+}
+
+static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_eem_64b_entry *entry,
+			       uint32_t index,
+			       enum tf_em_table_type table_type,
+			       enum tf_dir dir)
+{
+	int rc;
+	struct tf_eem_64b_entry table_entry;
+
+	rc = tf_em_read_entry(tbl_scope_cb,
+			      &table_entry,
+			      TF_EM_KEY_RECORD_SIZE,
+			      index,
+			      table_type,
+			      dir);
+
+	if (rc != 0)
+		return -EINVAL;
+
+	if (table_entry.hdr.word1 & (1 << TF_LKUP_RECORD_VALID_SHIFT)) {
+		if (entry != NULL) {
+			if (memcmp(&table_entry,
+				   entry,
+				   TF_EM_KEY_RECORD_SIZE) == 0)
+				return -EEXIST;
+		} else {
+			return -EEXIST;
+		}
+
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
+				    uint8_t	       *in_key,
+				    struct tf_eem_64b_entry *key_entry)
+{
+	key_entry->hdr.word1 = result->word1;
+
+	if (result->word1 & TF_LKUP_RECORD_ACT_REC_INT_MASK)
+		key_entry->hdr.pointer = result->pointer;
+	else
+		key_entry->hdr.pointer = result->pointer;
+
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+}
+
+/* tf_em_select_inject_table
+ *
+ * Returns:
+ * 0       - Key does not exist in either table and can be inserted
+ *	     at "index" in table "table".
+ * -EEXIST - Key already exists at "index" in table "table".
+ * -EINVAL - Neither table can accept the entry.
+ */
+static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
+					  enum tf_dir dir,
+					  struct tf_eem_64b_entry *entry,
+					  uint32_t key0_hash,
+					  uint32_t key1_hash,
+					  uint32_t *index,
+					  enum tf_em_table_type *table)
+{
+	int key0_entry;
+	int key1_entry;
+
+	/*
+	 * Check KEY0 table.
+	 */
+	key0_entry = tf_em_entry_exists(tbl_scope_cb,
+					 entry,
+					 key0_hash,
+					 KEY0_TABLE,
+					 dir);
+
+	/*
+	 * Check KEY1 table.
+	 */
+	key1_entry = tf_em_entry_exists(tbl_scope_cb,
+					 entry,
+					 key1_hash,
+					 KEY1_TABLE,
+					 dir);
+
+	if (key0_entry == -EEXIST) {
+		*table = KEY0_TABLE;
+		*index = key0_hash;
+		return -EEXIST;
+	} else if (key1_entry == -EEXIST) {
+		*table = KEY1_TABLE;
+		*index = key1_hash;
+		return -EEXIST;
+	} else if (key0_entry == 0) {
+		*table = KEY0_TABLE;
+		*index = key0_hash;
+		return 0;
+	} else if (key1_entry == 0) {
+		*table = KEY1_TABLE;
+		*index = key1_hash;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/** insert EEM entry API
+ *
+ * returns:
+ *   0       - entry inserted
+ *   -EINVAL - no usable slot found or the write failed
+ */
+int tf_insert_eem_entry(struct tf_session	   *session,
+			struct tf_tbl_scope_cb	   *tbl_scope_cb,
+			struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t	   mask;
+	uint32_t	   key0_hash;
+	uint32_t	   key1_hash;
+	uint32_t	   key0_index;
+	uint32_t	   key1_index;
+	struct tf_eem_64b_entry key_entry;
+	uint32_t	   index;
+	enum tf_em_table_type table_type;
+	uint32_t	   gfid;
+	int		   num_of_entry;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
+
+	key0_hash = tf_em_lkup_get_crc32_hash(session,
+				      &parms->key[num_of_entry] - 1,
+				      parms->dir);
+	key0_index = key0_hash & mask;
+
+	key1_hash =
+	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
+					parms->key);
+	key1_index = key1_hash & mask;
+
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct tf_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Find which table to use
+	 */
+	if (tf_em_select_inject_table(tbl_scope_cb,
+				      parms->dir,
+				      &key_entry,
+				      key0_index,
+				      key1_index,
+				      &index,
+				      &table_type) == 0) {
+		if (table_type == KEY0_TABLE) {
+			TF_SET_GFID(gfid,
+				    key0_index,
+				    KEY0_TABLE);
+		} else {
+			TF_SET_GFID(gfid,
+				    key1_index,
+				    KEY1_TABLE);
+		}
+
+		/*
+		 * Inject
+		 */
+		if (tf_em_write_entry(tbl_scope_cb,
+				      &key_entry,
+				      TF_EM_KEY_RECORD_SIZE,
+				      index,
+				      table_type,
+				      parms->dir) == 0) {
+			TF_SET_FLOW_ID(parms->flow_id,
+				       gfid,
+				       TF_GFID_TABLE_EXTERNAL,
+				       parms->dir);
+			TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+						     0,
+						     0,
+						     0,
+						     index,
+						     0,
+						     table_type);
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * delete callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+int tf_delete_eem_entry(struct tf *tfp,
+			struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_session	   *session;
+	struct tf_tbl_scope_cb	   *tbl_scope_cb;
+	enum tf_em_table_type hash_type;
+	uint32_t index;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	session = (struct tf_session *)tfp->session->core_data;
+	if (session == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	if (parms->flow_handle == 0)
+		return -EINVAL;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	if (tf_em_entry_exists(tbl_scope_cb,
+			       NULL,
+			       index,
+			       hash_type,
+			       parms->dir) == -EEXIST) {
+		tf_em_write_entry(tbl_scope_cb,
+				  &zero_key_entry,
+				  TF_EM_KEY_RECORD_SIZE,
+				  index,
+				  hash_type,
+				  parms->dir);
+
+		return 0;
+	}
+
+	return -EINVAL;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
new file mode 100644
index 0000000..8a3584f
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EM_H_
+#define _TF_EM_H_
+
+#include "tf_core.h"
+#include "tf_session.h"
+
+#define TF_HW_EM_KEY_MAX_SIZE 52
+#define TF_EM_KEY_RECORD_SIZE 64
+
+/** EEM Entry header
+ *
+ */
+struct tf_eem_entry_hdr {
+	uint32_t pointer;
+	uint32_t word1;  /*
+			  * The header is made up of two words,
+			  * this is the first word. This field has multiple
+			  * subfields, there is no suitable single name for
+			  * it so just going with word1.
+			  */
+#define TF_LKUP_RECORD_VALID_SHIFT 31
+#define TF_LKUP_RECORD_VALID_MASK 0x80000000
+#define TF_LKUP_RECORD_L1_CACHEABLE_SHIFT 30
+#define TF_LKUP_RECORD_L1_CACHEABLE_MASK 0x40000000
+#define TF_LKUP_RECORD_STRENGTH_SHIFT 28
+#define TF_LKUP_RECORD_STRENGTH_MASK 0x30000000
+#define TF_LKUP_RECORD_RESERVED_SHIFT 17
+#define TF_LKUP_RECORD_RESERVED_MASK 0x0FFE0000
+#define TF_LKUP_RECORD_KEY_SIZE_SHIFT 8
+#define TF_LKUP_RECORD_KEY_SIZE_MASK 0x0001FF00
+#define TF_LKUP_RECORD_ACT_REC_SIZE_SHIFT 3
+#define TF_LKUP_RECORD_ACT_REC_SIZE_MASK 0x000000F8
+#define TF_LKUP_RECORD_ACT_REC_INT_SHIFT 2
+#define TF_LKUP_RECORD_ACT_REC_INT_MASK 0x00000004
+#define TF_LKUP_RECORD_EXT_FLOW_CTR_SHIFT 1
+#define TF_LKUP_RECORD_EXT_FLOW_CTR_MASK 0x00000002
+#define TF_LKUP_RECORD_ACT_PTR_MSB_SHIFT 0
+#define TF_LKUP_RECORD_ACT_PTR_MSB_MASK 0x00000001
+};
+
+/** EEM Entry
+ *  Each EEM entry is 512 bits (64 bytes)
+ */
+struct tf_eem_64b_entry {
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+	/** Header is 8 bytes long */
+	struct tf_eem_entry_hdr hdr;
+};
+
+/**
+ * Allocates EEM Table scope
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ *   -ENOMEM - Out of memory
+ */
+int tf_alloc_eem_tbl_scope(struct tf *tfp,
+			   struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Frees the EEM table scope control block
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_free_eem_tbl_scope_cb(struct tf *tfp,
+			     struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Function to search for table scope control block structure
+ * with specified table scope ID.
+ *
+ * [in] session
+ *   Session to use for the search of the table scope control block
+ * [in] tbl_scope_id
+ *   Table scope ID to search for
+ *
+ * Returns:
+ *  Pointer to the found table scope control block struct or NULL if
+ *  table scope control block struct not found
+ */
+struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
+					  uint32_t tbl_scope_id);
+
+int tf_insert_eem_entry(struct tf_session *session,
+			struct tf_tbl_scope_cb *tbl_scope_cb,
+			struct tf_insert_em_entry_parms *parms);
+
+int tf_delete_eem_entry(struct tf *tfp,
+			struct tf_delete_em_entry_parms *parms);
+
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum tf_em_table_type table_type);
+
+#endif /* _TF_EM_H_ */
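
A minimal sketch of how the word1 subfields compose, using only the
masks and shifts defined above (the helper name and field values are
illustrative, not part of the patch):

    static uint32_t
    tf_em_word1_pack(uint32_t strength, uint32_t key_size, uint32_t act_rec_size)
    {
        uint32_t word1 = 0;

        word1 |= TF_LKUP_RECORD_VALID_MASK;  /* mark the record valid */
        word1 |= (strength << TF_LKUP_RECORD_STRENGTH_SHIFT) &
                 TF_LKUP_RECORD_STRENGTH_MASK;
        word1 |= (key_size << TF_LKUP_RECORD_KEY_SIZE_SHIFT) &
                 TF_LKUP_RECORD_KEY_SIZE_MASK;
        word1 |= (act_rec_size << TF_LKUP_RECORD_ACT_REC_SIZE_SHIFT) &
                 TF_LKUP_RECORD_ACT_REC_SIZE_MASK;
        return word1;
    }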
diff --git a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
new file mode 100644
index 0000000..417a99c
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
@@ -0,0 +1,166 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EXT_FLOW_HANDLE_H_
+#define _TF_EXT_FLOW_HANDLE_H_
+
+#define TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK	0x00000000F0000000ULL
+#define TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT	28
+#define TF_FLOW_TYPE_FLOW_HANDLE_MASK		0x00000000000000F0ULL
+#define TF_FLOW_TYPE_FLOW_HANDLE_SFT		4
+#define TF_FLAGS_FLOW_HANDLE_MASK		0x000000000000000FULL
+#define TF_FLAGS_FLOW_HANDLE_SFT		0
+#define TF_INDEX_FLOW_HANDLE_MASK		0xFFFFFFF000000000ULL
+#define TF_INDEX_FLOW_HANDLE_SFT		36
+#define TF_ENTRY_NUM_FLOW_HANDLE_MASK		0x0000000E00000000ULL
+#define TF_ENTRY_NUM_FLOW_HANDLE_SFT		33
+#define TF_HASH_TYPE_FLOW_HANDLE_MASK		0x0000000100000000ULL
+#define TF_HASH_TYPE_FLOW_HANDLE_SFT		32
+
+#define TF_FLOW_HANDLE_MASK (TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK |	\
+				TF_FLOW_TYPE_FLOW_HANDLE_MASK |		\
+				TF_FLAGS_FLOW_HANDLE_MASK |		\
+				TF_INDEX_FLOW_HANDLE_MASK |		\
+				TF_ENTRY_NUM_FLOW_HANDLE_MASK |		\
+				TF_HASH_TYPE_FLOW_HANDLE_MASK)
+
+#define TF_GET_FIELDS_FROM_FLOW_HANDLE(flow_handle,			\
+				       num_key_entries,			\
+				       flow_type,			\
+				       flags,				\
+				       index,				\
+				       entry_num,			\
+				       hash_type)			\
+do {									\
+	(num_key_entries) = \
+		(((flow_handle) & TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK) >> \
+		 TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT);			\
+	(flow_type) = (((flow_handle) & TF_FLOW_TYPE_FLOW_HANDLE_MASK) >> \
+		     TF_FLOW_TYPE_FLOW_HANDLE_SFT);			\
+	(flags) = (((flow_handle) & TF_FLAGS_FLOW_HANDLE_MASK) >>	\
+		     TF_FLAGS_FLOW_HANDLE_SFT);				\
+	(index) = (((flow_handle) & TF_INDEX_FLOW_HANDLE_MASK) >>	\
+		     TF_INDEX_FLOW_HANDLE_SFT);				\
+	(entry_num) = (((flow_handle) & TF_ENTRY_NUM_FLOW_HANDLE_MASK) >> \
+		     TF_ENTRY_NUM_FLOW_HANDLE_SFT);			\
+	(hash_type) = (((flow_handle) & TF_HASH_TYPE_FLOW_HANDLE_MASK) >> \
+		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
+} while (0)
+
+#define TF_SET_FIELDS_IN_FLOW_HANDLE(flow_handle,			\
+				     num_key_entries,			\
+				     flow_type,				\
+				     flags,				\
+				     index,				\
+				     entry_num,				\
+				     hash_type)				\
+do {									\
+	(flow_handle) &= ~TF_FLOW_HANDLE_MASK;				\
+	(flow_handle) |= \
+		(((num_key_entries) << TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT) & \
+		 TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= (((flow_type) << TF_FLOW_TYPE_FLOW_HANDLE_SFT) & \
+			TF_FLOW_TYPE_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= (((flags) << TF_FLAGS_FLOW_HANDLE_SFT) &	\
+			TF_FLAGS_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= ((((uint64_t)index) << TF_INDEX_FLOW_HANDLE_SFT) & \
+			TF_INDEX_FLOW_HANDLE_MASK);			\
+	(flow_handle) |=						\
+		((((uint64_t)entry_num) << TF_ENTRY_NUM_FLOW_HANDLE_SFT) & \
+		 TF_ENTRY_NUM_FLOW_HANDLE_MASK);			\
+	(flow_handle) |=						\
+		((((uint64_t)hash_type) << TF_HASH_TYPE_FLOW_HANDLE_SFT) & \
+		 TF_HASH_TYPE_FLOW_HANDLE_MASK);			\
+} while (0)
+#define TF_SET_FIELDS_IN_WH_FLOW_HANDLE TF_SET_FIELDS_IN_FLOW_HANDLE
+
+#define TF_GET_INDEX_FROM_FLOW_HANDLE(flow_handle,			\
+				      index)				\
+do {									\
+	index = (((flow_handle) & TF_INDEX_FLOW_HANDLE_MASK) >>		\
+		     TF_INDEX_FLOW_HANDLE_SFT);				\
+} while (0)
+
+#define TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(flow_handle,			\
+					  hash_type)			\
+do {									\
+	hash_type = (((flow_handle) & TF_HASH_TYPE_FLOW_HANDLE_MASK) >>	\
+		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
+} while (0)
+
+/*
+ * 32 bit Flow ID handlers
+ */
+#define TF_GFID_FLOW_ID_MASK		0xFFFFFFF0UL
+#define TF_GFID_FLOW_ID_SFT		4
+#define TF_FLAG_FLOW_ID_MASK		0x00000002UL
+#define TF_FLAG_FLOW_ID_SFT		1
+#define TF_DIR_FLOW_ID_MASK		0x00000001UL
+#define TF_DIR_FLOW_ID_SFT		0
+
+#define TF_SET_FLOW_ID(flow_id, gfid, flag, dir)			\
+do {									\
+	(flow_id) &= ~(TF_GFID_FLOW_ID_MASK |				\
+		     TF_FLAG_FLOW_ID_MASK |				\
+		     TF_DIR_FLOW_ID_MASK);				\
+	(flow_id) |= (((gfid) << TF_GFID_FLOW_ID_SFT) &			\
+		    TF_GFID_FLOW_ID_MASK) |				\
+		(((flag) << TF_FLAG_FLOW_ID_SFT) &			\
+		 TF_FLAG_FLOW_ID_MASK) |				\
+		(((dir) << TF_DIR_FLOW_ID_SFT) &			\
+		 TF_DIR_FLOW_ID_MASK);					\
+} while (0)
+
+#define TF_GET_GFID_FROM_FLOW_ID(flow_id, gfid)				\
+do {									\
+	gfid = (((flow_id) & TF_GFID_FLOW_ID_MASK) >>			\
+		TF_GFID_FLOW_ID_SFT);					\
+} while (0)
+
+#define TF_GET_DIR_FROM_FLOW_ID(flow_id, dir)				\
+do {									\
+	dir = (((flow_id) & TF_DIR_FLOW_ID_MASK) >>			\
+		TF_DIR_FLOW_ID_SFT);					\
+} while (0)
+
+#define TF_GET_FLAG_FROM_FLOW_ID(flow_id, flag)				\
+do {									\
+	flag = (((flow_id) & TF_FLAG_FLOW_ID_MASK) >>			\
+		TF_FLAG_FLOW_ID_SFT);					\
+} while (0)
+
+/*
+ * 32 bit GFID handlers
+ */
+#define TF_HASH_INDEX_GFID_MASK	0x07FFFFFFUL
+#define TF_HASH_INDEX_GFID_SFT	0
+#define TF_HASH_TYPE_GFID_MASK	0x08000000UL
+#define TF_HASH_TYPE_GFID_SFT	27
+
+#define TF_GFID_TABLE_INTERNAL 0
+#define TF_GFID_TABLE_EXTERNAL 1
+
+#define TF_SET_GFID(gfid, index, type)					\
+do {									\
+	gfid = (((index) << TF_HASH_INDEX_GFID_SFT) &			\
+		TF_HASH_INDEX_GFID_MASK) |				\
+		(((type) << TF_HASH_TYPE_GFID_SFT) &			\
+		 TF_HASH_TYPE_GFID_MASK);				\
+} while (0)
+
+#define TF_GET_HASH_INDEX_FROM_GFID(gfid, index)			\
+do {									\
+	index = (((gfid) & TF_HASH_INDEX_GFID_MASK) >>			\
+		TF_HASH_INDEX_GFID_SFT);				\
+} while (0)
+
+#define TF_GET_HASH_TYPE_FROM_GFID(gfid, type)				\
+do {									\
+	type = (((gfid) & TF_HASH_TYPE_GFID_MASK) >>			\
+		TF_HASH_TYPE_GFID_SFT);					\
+} while (0)
+
+
+#endif /* _TF_EXT_FLOW_HANDLE_H_ */
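
A short sketch of a flow handle round trip with the macros above; the
values are arbitrary and for illustration only:

    uint64_t flow_handle = 0;
    uint32_t index = 0x1234;
    uint32_t gfid = 0, flow_id = 0;
    uint8_t num_key_entries = 2, flow_type = 1, flags = 0;
    uint8_t entry_num = 0, hash_type = 1;

    /* Pack the fields, then recover index and hash type */
    TF_SET_FIELDS_IN_FLOW_HANDLE(flow_handle, num_key_entries, flow_type,
                                 flags, index, entry_num, hash_type);
    TF_GET_INDEX_FROM_FLOW_HANDLE(flow_handle, index);
    TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(flow_handle, hash_type);

    /* Derive a GFID and a 32 bit flow id for the external table */
    TF_SET_GFID(gfid, index, TF_GFID_TABLE_EXTERNAL);
    TF_SET_FLOW_ID(flow_id, gfid, TF_GFID_TABLE_EXTERNAL, TF_DIR_RX);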
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 3f3001c..bdf8f15 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -869,6 +869,177 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+/**
+ * Sends EM mem register request to Firmware
+ */
+int tf_msg_em_mem_rgtr(struct tf *tfp,
+		       int           page_lvl,
+		       int           page_size,
+		       uint64_t      dma_addr,
+		       uint16_t     *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
+	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.page_level = page_lvl;
+	req.page_size = page_size;
+	req.page_dir = tfp_cpu_to_le_64(dma_addr);
+
+	parms.tf_type = HWRM_TF_CTXT_MEM_RGTR;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*ctx_id = tfp_le_to_cpu_16(resp.ctx_id);
+
+	return rc;
+}
+
+/**
+ * Sends EM mem unregister request to Firmware
+ */
+int tf_msg_em_mem_unrgtr(struct tf *tfp,
+			 uint16_t  *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_unrgtr_input req = {0};
+	struct hwrm_tf_ctxt_mem_unrgtr_output resp = {0};
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.ctx_id = tfp_cpu_to_le_32(*ctx_id);
+
+	parms.tf_type = HWRM_TF_CTXT_MEM_UNRGTR;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
+ * Sends EM qcaps request to Firmware
+ */
+int tf_msg_em_qcaps(struct tf *tfp,
+		    int dir,
+		    struct tf_em_caps *em_caps)
+{
+	int rc;
+	struct hwrm_tf_ext_em_qcaps_input  req = {0};
+	struct hwrm_tf_ext_em_qcaps_output resp = { 0 };
+	uint32_t             flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+
+	parms.tf_type = HWRM_TF_EXT_EM_QCAPS;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	em_caps->supported = tfp_le_to_cpu_32(resp.supported);
+	em_caps->max_entries_supported =
+		tfp_le_to_cpu_32(resp.max_entries_supported);
+	em_caps->key_entry_size = tfp_le_to_cpu_16(resp.key_entry_size);
+	em_caps->record_entry_size =
+		tfp_le_to_cpu_16(resp.record_entry_size);
+	em_caps->efc_entry_size = tfp_le_to_cpu_16(resp.efc_entry_size);
+
+	return rc;
+}
+
+/**
+ * Sends EM config request to Firmware
+ */
+int tf_msg_em_cfg(struct tf *tfp,
+		  uint32_t   num_entries,
+		  uint16_t   key0_ctx_id,
+		  uint16_t   key1_ctx_id,
+		  uint16_t   record_ctx_id,
+		  uint16_t   efc_ctx_id,
+		  int        dir)
+{
+	int rc;
+	struct hwrm_tf_ext_em_cfg_input  req = {0};
+	struct hwrm_tf_ext_em_cfg_output resp = {0};
+	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	flags |= HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_PREFERRED_OFFLOAD;
+
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.num_entries = tfp_cpu_to_le_32(num_entries);
+
+	req.key0_ctx_id = tfp_cpu_to_le_16(key0_ctx_id);
+	req.key1_ctx_id = tfp_cpu_to_le_16(key1_ctx_id);
+	req.record_ctx_id = tfp_cpu_to_le_16(record_ctx_id);
+	req.efc_ctx_id = tfp_cpu_to_le_16(efc_ctx_id);
+
+	parms.tf_type = HWRM_TF_EXT_EM_CFG;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
+ * Sends EM operation request to Firmware
+ */
+int tf_msg_em_op(struct tf *tfp,
+		 int        dir,
+		 uint16_t   op)
+{
+	int rc;
+	struct hwrm_tf_ext_em_op_input  req = {0};
+	struct hwrm_tf_ext_em_op_output resp = {0};
+	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.op = tfp_cpu_to_le_16(op);
+
+	parms.tf_type = HWRM_TF_EXT_EM_OP;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
 int
 tf_msg_set_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 9055b16..b8d8c1e 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -122,6 +122,46 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   struct tf_rm_entry *sram_entry);
 
 /**
+ * Sends EM mem register request to Firmware
+ */
+int tf_msg_em_mem_rgtr(struct tf *tfp,
+		       int           page_lvl,
+		       int           page_size,
+		       uint64_t      dma_addr,
+		       uint16_t     *ctx_id);
+
+/**
+ * Sends EM mem unregister request to Firmware
+ */
+int tf_msg_em_mem_unrgtr(struct tf *tfp,
+			 uint16_t     *ctx_id);
+
+/**
+ * Sends EM qcaps request to Firmware
+ */
+int tf_msg_em_qcaps(struct tf *tfp,
+		    int dir,
+		    struct tf_em_caps *em_caps);
+
+/**
+ * Sends EM config request to Firmware
+ */
+int tf_msg_em_cfg(struct tf *tfp,
+		  uint32_t      num_entries,
+		  uint16_t      key0_ctx_id,
+		  uint16_t      key1_ctx_id,
+		  uint16_t      record_ctx_id,
+		  uint16_t      efc_ctx_id,
+		  int           dir);
+
+/**
+ * Sends EM operation request to Firmware
+ */
+int tf_msg_em_op(struct tf *tfp,
+		 int        dir,
+		 uint16_t   op);
+
+/**
  * Sends tcam entry 'set' to the Firmware.
  *
  * [in] tfp
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 14bf4ef..cc27b9c 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -15,7 +15,7 @@
 #include "hsi_struct_def_dpdk.h"
 
 #include "tf_core.h"
-#include "tf_session.h"
+#include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
 #include "hwrm_tf.h"
@@ -30,6 +30,1366 @@
 /* Number of pointers per page_size */
 #define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
+#define TF_EM_PG_SZ_4K        (1 << 12)
+#define TF_EM_PG_SZ_8K        (1 << 13)
+#define TF_EM_PG_SZ_64K       (1 << 16)
+#define TF_EM_PG_SZ_256K      (1 << 18)
+#define TF_EM_PG_SZ_1M        (1 << 20)
+#define TF_EM_PG_SZ_2M        (1 << 21)
+#define TF_EM_PG_SZ_4M        (1 << 22)
+#define TF_EM_PG_SZ_1G        (1 << 30)
+
+#define	TF_EM_CTX_ID_INVALID   0xFFFF
+
+#define	TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define	TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
+/**
+ * Function to free a page table
+ *
+ * [in] tp
+ *   Pointer to the page table to free
+ */
+static void
+tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
+{
+	uint32_t i;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		if (!tp->pg_va_tbl[i]) {
+			PMD_DRV_LOG(WARNING,
+				    "No map for page %d table %016" PRIu64 "\n",
+				    i,
+				    (uint64_t)(uintptr_t)tp);
+			continue;
+		}
+
+		tfp_free(tp->pg_va_tbl[i]);
+		tp->pg_va_tbl[i] = NULL;
+	}
+
+	tp->pg_count = 0;
+	tfp_free(tp->pg_va_tbl);
+	tp->pg_va_tbl = NULL;
+	tfp_free(tp->pg_pa_tbl);
+	tp->pg_pa_tbl = NULL;
+}
+
+/**
+ * Function to free an EM table
+ *
+ * [in] tbl
+ *   Pointer to the EM table to free
+ */
+static void
+tf_em_free_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		PMD_DRV_LOG(INFO,
+			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
+			   TF_EM_PAGE_SIZE,
+			    i,
+			    tp->pg_count);
+
+		tf_em_free_pg_tbl(tp);
+	}
+
+	tbl->l0_addr = NULL;
+	tbl->l0_dma_addr = 0;
+	tbl->num_lvl = 0;
+	tbl->num_data_pages = 0;
+}
+
+/**
+ * Allocates a single page table
+ *
+ * [in] tp
+ *   Pointer to the page table to populate
+ *
+ * [in] pg_count
+ *   Page count to allocate
+ *
+ * [in] pg_size
+ *   Size of each page
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
+		   uint32_t pg_count,
+		   uint32_t pg_size)
+{
+	uint32_t i;
+	struct tfp_calloc_parms parms;
+
+	parms.nitems = pg_count;
+	parms.size = sizeof(void *);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0)
+		return -ENOMEM;
+
+	tp->pg_va_tbl = parms.mem_va;
+
+	if (tfp_calloc(&parms) != 0) {
+		tfp_free(tp->pg_va_tbl);
+		return -ENOMEM;
+	}
+
+	tp->pg_pa_tbl = parms.mem_va;
+
+	tp->pg_count = 0;
+	tp->pg_size = pg_size;
+
+	for (i = 0; i < pg_count; i++) {
+		parms.nitems = 1;
+		parms.size = pg_size;
+		parms.alignment = TF_EM_PAGE_ALIGNMENT;
+
+		if (tfp_calloc(&parms) != 0)
+			goto cleanup;
+
+		tp->pg_pa_tbl[i] = (uint64_t)(uintptr_t)parms.mem_pa;
+		tp->pg_va_tbl[i] = parms.mem_va;
+
+		memset(tp->pg_va_tbl[i], 0, pg_size);
+		tp->pg_count++;
+	}
+
+	return 0;
+
+cleanup:
+	tf_em_free_pg_tbl(tp);
+	return -ENOMEM;
+}
+
+/**
+ * Allocates EM page tables
+ *
+ * [in] tbl
+ *   Table to allocate pages for
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp;
+	int rc = 0;
+	int i;
+	uint32_t j;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		rc = tf_em_alloc_pg_tbl(tp,
+					tbl->page_cnt[i],
+					TF_EM_PAGE_SIZE);
+		if (rc) {
+			PMD_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d\n",
+				i);
+			goto cleanup;
+		}
+
+		for (j = 0; j < tp->pg_count; j++) {
+			PMD_DRV_LOG(INFO,
+				"EEM: Allocated page table: size %u lvl %d cnt"
+				" %u VA:%p PA:%p\n",
+				TF_EM_PAGE_SIZE,
+				i,
+				tp->pg_count,
+				(uint32_t *)tp->pg_va_tbl[j],
+				(uint32_t *)(uintptr_t)tp->pg_pa_tbl[j]);
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_free_page_table(tbl);
+	return rc;
+}
+
+/**
+ * Links EM page tables
+ *
+ * [in] tp
+ *   Pointer to page table
+ *
+ * [in] tp_next
+ *   Pointer to the next page table
+ *
+ * [in] set_pte_last
+ *   Flag controlling if the page table is last
+ */
+static void
+tf_em_link_page_table(struct tf_em_page_tbl *tp,
+		      struct tf_em_page_tbl *tp_next,
+		      bool set_pte_last)
+{
+	uint64_t *pg_pa = tp_next->pg_pa_tbl;
+	uint64_t *pg_va;
+	uint64_t valid;
+	uint32_t k = 0;
+	uint32_t i;
+	uint32_t j;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			if (k == tp_next->pg_count - 2 && set_pte_last)
+				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
+			else if (k == tp_next->pg_count - 1 && set_pte_last)
+				valid = PTU_PTE_LAST | PTU_PTE_VALID;
+			else
+				valid = PTU_PTE_VALID;
+
+			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
+			if (++k >= tp_next->pg_count)
+				return;
+		}
+	}
+}
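+/*
+ * Example (illustrative): when a two level table has a single level 0
+ * page and N level 1 pages, this writes the N physical page addresses
+ * into consecutive level 0 PTEs. With set_pte_last, the final two PTEs
+ * additionally carry PTU_PTE_NEXT_TO_LAST and PTU_PTE_LAST so the
+ * hardware can detect the end of the table.
+ */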
+
+/**
+ * Set up an EM page table
+ *
+ * [in] tbl
+ *   Pointer to EM page table
+ */
+static void
+tf_em_setup_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp_next;
+	struct tf_em_page_tbl *tp;
+	bool set_pte_last = 0;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl - 1; i++) {
+		tp = &tbl->pg_tbl[i];
+		tp_next = &tbl->pg_tbl[i + 1];
+		if (i == tbl->num_lvl - 2)
+			set_pte_last = 1;
+		tf_em_link_page_table(tp, tp_next, set_pte_last);
+	}
+
+	tbl->l0_addr = tbl->pg_tbl[PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[PT_LVL_0].pg_pa_tbl[0];
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   Success  - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
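+/*
+ * Worked example (illustrative): with 2MB pages (TF_EM_PAGE_SHIFT = 21)
+ * each page holds 2MB / 8B = 262144 pointers. For 1M entries of 64B,
+ * data_size is 64MB: a single 2MB page is too small, but one level of
+ * indirection covers 262144 * 2MB, so the function returns PT_LVL_1
+ * with *num_data_pages = 64MB / 2MB = 32.
+ */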
+
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == PT_LVL_0) {
+		page_cnt[PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == PT_LVL_1) {
+		page_cnt[PT_LVL_1] = num_data_pages;
+		page_cnt[PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	} else if (max_lvl == PT_LVL_2) {
+		page_cnt[PT_LVL_2] = num_data_pages;
+		page_cnt[PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_2], page_size);
+		page_cnt[PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
+
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_size_table(struct tf_em_table *tbl)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* If only one is set then error */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
+					  tbl->entry_size,
+					  num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		PMD_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		PMD_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type,
+			    (uint64_t)num_entries * tbl->entry_size,
+			    TF_EM_PAGE_SIZE);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
+				page_cnt);
+
+	PMD_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	PMD_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
+		    num_data_pages,
+		    page_cnt[PT_LVL_0],
+		    page_cnt[PT_LVL_1],
+		    page_cnt[PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ */
+static void
+tf_em_ctx_unreg(struct tf *tfp,
+		struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct tf_em_table *tbl;
+	int i;
+
+	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
+			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
+			tf_em_free_page_table(tbl);
+		}
+	}
+}
+
+/**
+ * Registers EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of Memory
+ */
+static int
+tf_em_ctx_reg(struct tf *tfp,
+	      struct tf_tbl_scope_cb *tbl_scope_cb,
+	      int dir)
+{
+	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct tf_em_table *tbl;
+	int rc = 0;
+	int i;
+
+	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries && tbl->entry_size) {
+			rc = tf_em_size_table(tbl);
+
+			if (rc)
+				goto cleanup;
+
+			rc = tf_em_alloc_page_table(tbl);
+			if (rc)
+				goto cleanup;
+
+			tf_em_setup_page_table(tbl);
+			rc = tf_msg_em_mem_rgtr(tfp,
+						tbl->num_lvl - 1,
+						TF_EM_PAGE_SIZE_ENUM,
+						tbl->l0_dma_addr,
+						&tbl->ctx_id);
+			if (rc)
+				goto cleanup;
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	return rc;
+}
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			PMD_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR, "EEM: Invalid number of Tx requested: "
+				    "%u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+		       (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].entry_size =
+		parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[EFC_TABLE].num_entries =
+		0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].entry_size =
+		parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[EFC_TABLE].num_entries =
+		0;
+
+	return 0;
+}
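+/*
+ * Worked example (illustrative): rx_mem_size_in_mb = 64 with a 448 bit
+ * max key and a 512 bit max action entry gives key_b = 2 * (56 + 1) =
+ * 114 and action_b = 65, so num_entries = 64MB / 179B, roughly 374K.
+ * Rounding up to the next supported power of two yields 512K entries,
+ * i.e. rx_num_flows_in_k = 512.
+ */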
+
+/**
+ * Internal function to set a Table Entry. Supports all internal Table Types
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_set_tbl_entry_internal(struct tf *tfp,
+			  struct tf_set_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Set the entry */
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Set failed, type:%d, rc:%d\n",
+			    parms->dir,
+			    parms->type,
+			    rc);
+	}
+
+	return rc;
+}
+
+/**
+ * Internal function to get a Table Entry. Supports all Table Types
+ * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_get_tbl_entry_internal(struct tf *tfp,
+			  struct tf_get_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Get failed, type:%d, rc:%d\n",
+			    parms->dir,
+			    parms->type,
+			    rc);
+	}
+
+	return rc;
+}
+
+#if (TF_SHADOW == 1)
+/**
+ * Allocate Tbl entry from the Shadow DB. The Shadow DB is searched for
+ * the requested entry. If found, the ref count is incremented and
+ * returned.
+ *
+ * [in] tfs
+ *   Pointer to session
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0           - Success, entry found and ref count incremented
+ *  -EOPNOTSUPP - Alloc with search is not yet supported
+ */
+static int
+tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
+			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
+{
+	PMD_DRV_LOG(ERR,
+		    "dir:%d, Entry Alloc with search not supported\n",
+		    parms->dir);
+
+	return -EOPNOTSUPP;
+}
+
+/**
+ * Free Tbl entry from the Shadow DB. The Shadow DB is searched for
+ * the requested entry. If found, the ref count is decremented and
+ * the new ref count returned.
+ *
+ * [in] tfs
+ *   Pointer to session
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0           - Success, entry found and ref count decremented
+ *  -EOPNOTSUPP - Free with search is not yet supported
+ */
+static int
+tf_free_tbl_entry_shadow(struct tf_session *tfs,
+			 struct tf_free_tbl_entry_parms *parms)
+{
+	PMD_DRV_LOG(ERR,
+		    "dir:%d, Entry Free with search not supported\n",
+		    parms->dir);
+
+	return -EOPNOTSUPP;
+}
+#endif /* TF_SHADOW */
+
+/**
+ * Create External Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ * [in] tbl_scope_id
+ *   id of the table scope
+ * [in] num_entries
+ *   number of entries to write
+ * [in] entry_sz_bytes
+ *   size of each entry
+ *
+ * Return:
+ *  0       - Success, pool created
+ *  -ENOMEM - Out of memory
+ *  -EINVAL - Failure, the pool could not be initialized or filled
+ */
+static int
+tf_create_tbl_pool_external(struct tf_session *session,
+			    enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb,
+			    uint32_t table_scope_id,
+			    uint32_t num_entries,
+			    uint32_t entry_sz_bytes)
+
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &tbl_scope_cb->ext_pool[dir][TF_EXT_POOL_0];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		PMD_DRV_LOG(ERR, "%d: TBL: external pool failure %s\n",
+			    dir, strerror(-ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR, "%d: TBL: stack init failure %s\n",
+			    dir, strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Save the allocated memory address so that it can
+	 * be freed when the table scope is freed.
+	 */
+	tbl_scope_cb->ext_pool_mem[dir][TF_EXT_POOL_0] =
+		(uint32_t *)parms.mem_va;
+
+	/* Fill pool with indexes
+	 */
+	j = num_entries * entry_sz_bytes - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
+				    dir, strerror(-rc));
+			goto cleanup;
+		}
+		j -= entry_sz_bytes;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
+			    dir, strerror(-rc));
+		goto cleanup;
+	}
+	/* Set the table scope associated with the pool
+	 */
+	session->ext_pool_2_scope[dir][TF_EXT_POOL_0] = table_scope_id;
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
+/**
+ * Destroy External Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ *
+ */
+static void
+tf_destroy_tbl_pool_external(struct tf_session *session,
+			    enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	uint32_t *ext_pool_mem =
+		tbl_scope_cb->ext_pool_mem[dir][TF_EXT_POOL_0];
+
+	tfp_free(ext_pool_mem);
+
+	/* Invalidate the table scope associated with the pool
+	 */
+	session->ext_pool_2_scope[dir][TF_EXT_POOL_0] = TF_TBL_SCOPE_INVALID;
+}
+
+/**
+ * Allocate External Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0           - Success, entry allocated (no search support)
+ *  -EINVAL     - Parameter error
+ *  -EOPNOTSUPP - Type not supported
+ *  Negative    - Failure, entry not allocated, pool exhausted
+ */
+static int
+tf_alloc_tbl_entry_pool_external(struct tf *tfp,
+				 struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	uint32_t index;
+	struct tf_session *tfs;
+	uint32_t tbl_scope_id;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_id = tfs->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+	tbl_scope_cb = tbl_scope_cb_find(tfs, tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, table scope not allocated\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_pool[parms->dir][TF_EXT_POOL_0];
+
+	/* Allocate an element
+	 */
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Allocation failed, type:%d\n",
+		   parms->dir,
+		   parms->type);
+		return rc;
+	}
+	parms->idx = index;
+	return rc;
+}
+
+/**
+ * Allocate Internal Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated
+ *  -ENOMEM - Failure, entry not allocated, out of resources
+ */
+static int
+tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
+				 struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	int free_cnt;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	id = ba_alloc(session_pool);
+	if (id == -1) {
+		free_cnt = ba_free_count(session_pool);
+
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Allocation failed, type:%d, free:%d\n",
+		   parms->dir,
+		   parms->type,
+		   free_cnt);
+		return -ENOMEM;
+	}
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_ADD_BASE,
+			    id,
+			    &index);
+	parms->idx = index;
+	return rc;
+}
+
+/**
+ * Free External Tbl entry to the session pool.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ * [in] parms
+ *   Free parameters
+ *
+ * Return:
+ *  0           - Success, entry freed
+ *  -EINVAL     - Parameter error
+ *  -EOPNOTSUPP - Type not supported
+ */
+static int
+tf_free_tbl_entry_pool_external(struct tf *tfp,
+		       struct tf_free_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	uint32_t index;
+	uint32_t tbl_scope_id;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_id = tfs->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+	tbl_scope_cb = tbl_scope_cb_find(tfs, tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, table scope error\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_pool[parms->dir][TF_EXT_POOL_0];
+
+	index = parms->idx;
+
+	rc = stack_push(pool, index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, consistency error, stack full, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+	}
+	return rc;
+}
+
+/**
+ * Free Internal Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ * [in] parms
+ *   Free parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *  -ENOMEM - Failure, entry was not previously allocated
+ */
+static int
+tf_free_tbl_entry_pool_internal(struct tf *tfp,
+		       struct tf_free_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	int id;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+	uint32_t index;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Check if element was indeed allocated */
+	id = ba_inuse_free(session_pool, index);
+	if (id == -1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Element not previously alloc'ed, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -ENOMEM;
+	}
+
+	return rc;
+}
+
 /* API defined in tf_tbl.h */
 void
 tf_init_tbl_pool(struct tf_session *session)
@@ -41,3 +1401,436 @@ tf_init_tbl_pool(struct tf_session *session)
 			TF_TBL_SCOPE_INVALID;
 	}
 }
+
+/* API defined in tf_em.h */
+struct tf_tbl_scope_cb *
+tbl_scope_cb_find(struct tf_session *session,
+		  uint32_t tbl_scope_id)
+{
+	int i;
+
+	/* Check that id is valid */
+	i = ba_inuse(session->tbl_scope_pool_rx, tbl_scope_id);
+	if (i < 0)
+		return NULL;
+
+	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
+		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &session->tbl_scopes[i];
+	}
+
+	return NULL;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_eem_tbl_scope_cb(struct tf *tfp,
+			 struct tf_free_tbl_scope_parms *parms)
+{
+	int rc = 0;
+	enum tf_dir  dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+
+	session = (struct tf_session *)(tfp->session->core_data);
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	/* Free Table control block */
+	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
+
+	/* Free resources in both directions */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(session,
+					     dir,
+					     tbl_scope_cb);
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+
+		/* free table scope and all associated resources */
+		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	}
+
+	return rc;
+}
+
+/* API defined in tf_em.h */
+int
+tf_alloc_eem_tbl_scope(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	enum tf_dir dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_em_table *em_tables;
+	int index;
+	struct tf_session *session;
+	struct tf_free_tbl_scope_parms free_parms;
+
+	/* check parameters */
+	if (parms == NULL || tfp->session == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* Get Table Scope control block from the session pool */
+	index = ba_alloc(session->tbl_scope_pool_rx);
+	if (index == -1) {
+		PMD_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
+			    "Control Block\n");
+		return -ENOMEM;
+	}
+
+	tbl_scope_cb = &session->tbl_scopes[index];
+	tbl_scope_cb->index = index;
+	tbl_scope_cb->tbl_scope_id = index;
+	parms->tbl_scope_id = index;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				"EEM: Unable to query for EEM capability\n");
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/*
+		 * Allocate tables and signal configuration to FW
+		 */
+		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx\n");
+			goto cleanup;
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_msg_em_cfg(tfp,
+				   em_tables[KEY0_TABLE].num_entries,
+				   em_tables[KEY0_TABLE].ctx_id,
+				   em_tables[KEY1_TABLE].ctx_id,
+				   em_tables[RECORD_TABLE].ctx_id,
+				   em_tables[EFC_TABLE].ctx_id,
+				   dir);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				"TBL: Unable to configure EEM in firmware\n");
+			goto cleanup_full;
+		}
+
+		rc = tf_msg_em_op(tfp,
+				  dir,
+				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
+
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware\n");
+			goto cleanup_full;
+		}
+
+		/* Allocate the pool of offsets of the external memory.
+		 * Initially, this is a single fixed size pool for all external
+		 * actions related to a single table scope.
+		 */
+		rc = tf_create_tbl_pool_external(session,
+						 dir,
+						 tbl_scope_cb,
+						 index,
+						 TF_EXT_POOL_ENTRY_CNT,
+						 TF_EXT_POOL_ENTRY_SZ_BYTES);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "%d TBL: Unable to allocate idx pools %s\n",
+				    dir,
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	return 0;
+
+cleanup_full:
+	free_parms.tbl_scope_id = index;
+	tf_free_eem_tbl_scope_cb(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
+	return -EINVAL;
+}
+
+/* API defined in tf_core.h */
+int
+tf_set_tbl_entry(struct tf *tfp,
+		 struct tf_set_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+
+	if (tfp == NULL || parms == NULL || parms->data == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		void *base_addr;
+		uint32_t offset = TF_ACT_REC_INDEX_2_OFFSET(parms->idx);
+		uint32_t tbl_scope_id;
+
+		session = (struct tf_session *)(tfp->session->core_data);
+
+		tbl_scope_id =
+			session->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+
+		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Table scope not allocated\n",
+				    parms->dir);
+			return -EINVAL;
+		}
+
+		/* Get the table scope control block associated with the
+		 * external pool
+		 */
+
+		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
+
+		if (tbl_scope_cb == NULL)
+			return -EINVAL;
+
+		/* External table, implicitly the Action table */
+		base_addr = tf_em_get_table_page(tbl_scope_cb,
+						 parms->dir,
+						 offset,
+						 RECORD_TABLE);
+		if (base_addr == NULL) {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Base address lookup failed\n",
+				    parms->dir);
+			return -EINVAL;
+		}
+
+		offset %= TF_EM_PAGE_SIZE;
+		rte_memcpy((char *)base_addr + offset,
+			   parms->data,
+			   parms->data_sz_in_bytes);
+	} else {
+		/* Internal table type processing */
+		rc = tf_set_tbl_entry_internal(tfp, parms);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Set failed, type:%d, rc:%d\n",
+				    parms->dir,
+				    parms->type,
+				    rc);
+		}
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_get_tbl_entry(struct tf *tfp,
+		 struct tf_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, External table type not supported\n",
+			    parms->dir);
+
+		rc = -EOPNOTSUPP;
+	} else {
+		/* Internal table type processing */
+		rc = tf_get_tbl_entry_internal(tfp, parms);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Get failed, type:%d, rc:%d\n",
+				    parms->dir,
+				    parms->type,
+				    rc);
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_scope(struct tf *tfp,
+		   struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+
+	/* check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	rc = tf_alloc_eem_tbl_scope(tfp, parms);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_scope(struct tf *tfp,
+		  struct tf_free_tbl_scope_parms *parms)
+{
+	int rc;
+
+	/* check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	/* free table scope and all associated resources */
+	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_entry(struct tf *tfp,
+		   struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+#if (TF_SHADOW == 1)
+	struct tf_session *tfs;
+#endif /* TF_SHADOW */
+
+	/* Check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+	/*
+	 * No shadow copy support for external tables, allocate and return
+	 */
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		rc = tf_alloc_tbl_entry_pool_external(tfp, parms);
+		return rc;
+	}
+
+#if (TF_SHADOW == 1)
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Search the Shadow DB for requested element. If not found go
+	 * allocate one from the Session Pool
+	 */
+	if (parms->search_enable && tfs->shadow_copy) {
+		rc = tf_alloc_tbl_entry_shadow(tfs, parms);
+		/* Entry found and parms populated with return data */
+		if (rc == 0)
+			return rc;
+	}
+#endif /* TF_SHADOW */
+
+	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
+	if (rc)
+		PMD_DRV_LOG(ERR, "dir%d, Alloc failed, rc:%d\n",
+			    parms->dir,
+			    rc);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_entry(struct tf *tfp,
+		  struct tf_free_tbl_entry_parms *parms)
+{
+	int rc;
+#if (TF_SHADOW == 1)
+	struct tf_session *tfs;
+#endif /* TF_SHADOW */
+
+	/* Check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+	/*
+	 * No shadow of external tables so just free the entry
+	 */
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		rc = tf_free_tbl_entry_pool_external(tfp, parms);
+		return rc;
+	}
+
+#if (TF_SHADOW == 1)
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Search the Shadow DB for the requested element. If not found
+	 * fall through and free it from the Session Pool
+	 */
+	if (parms->search_enable && tfs->shadow_copy) {
+		rc = tf_free_tbl_entry_shadow(tfs, parms);
+		/* Entry freed and parms populated with return data */
+		if (rc == 0)
+			return rc;
+	}
+#endif /* TF_SHADOW */
+
+	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
+
+	if (rc)
+		PMD_DRV_LOG(ERR, "dir:%d, Alloc failed, rc:%d\n",
+			    parms->dir,
+			    rc);
+	return rc;
+}
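
For reference, a minimal sketch of the internal table entry life cycle
through the APIs above (illustrative only; the action record contents
are assumed):

    struct tf_alloc_tbl_entry_parms aparms = { 0 };
    struct tf_set_tbl_entry_parms sparms = { 0 };
    struct tf_free_tbl_entry_parms fparms = { 0 };
    uint8_t act_rec[16] = { 0 };  /* made up action record payload */
    int rc;

    aparms.dir = TF_DIR_RX;
    aparms.type = TF_TBL_TYPE_FULL_ACT_RECORD;
    rc = tf_alloc_tbl_entry(tfp, &aparms);

    sparms.dir = TF_DIR_RX;
    sparms.type = TF_TBL_TYPE_FULL_ACT_RECORD;
    sparms.idx = aparms.idx;
    sparms.data = act_rec;
    sparms.data_sz_in_bytes = sizeof(act_rec);
    rc = tf_set_tbl_entry(tfp, &sparms);

    fparms.dir = TF_DIR_RX;
    fparms.type = TF_TBL_TYPE_FULL_ACT_RECORD;
    fparms.idx = aparms.idx;
    rc = tf_free_tbl_entry(tfp, &fparms);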
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 5a5e72f..cb7ce9d 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -7,6 +7,7 @@
 #define _TF_TBL_H_
 
 #include <stdint.h>
+#include "stack.h"
 
 enum tf_pg_tbl_lvl {
 	PT_LVL_0,
@@ -15,6 +16,48 @@ enum tf_pg_tbl_lvl {
 	PT_LVL_MAX
 };
 
+enum tf_em_table_type {
+	KEY0_TABLE,
+	KEY1_TABLE,
+	RECORD_TABLE,
+	EFC_TABLE,
+	MAX_TABLE
+};
+
+struct tf_em_page_tbl {
+	uint32_t	pg_count;
+	uint32_t	pg_size;
+	void		**pg_va_tbl;
+	uint64_t	*pg_pa_tbl;
+};
+
+struct tf_em_table {
+	int				type;
+	uint32_t			num_entries;
+	uint16_t			ctx_id;
+	uint32_t			entry_size;
+	int				num_lvl;
+	uint32_t			page_cnt[PT_LVL_MAX];
+	uint64_t			num_data_pages;
+	void				*l0_addr;
+	uint64_t			l0_dma_addr;
+	struct tf_em_page_tbl pg_tbl[PT_LVL_MAX];
+};
+
+struct tf_em_ctx_mem_info {
+	struct tf_em_table		em_tables[MAX_TABLE];
+};
+
+/** table scope control block content */
+struct tf_em_caps {
+	uint32_t flags;
+	uint32_t supported;
+	uint32_t max_entries_supported;
+	uint16_t key_entry_size;
+	uint16_t record_entry_size;
+	uint16_t efc_entry_size;
+};
+
 /** Invalid table scope id */
 #define TF_TBL_SCOPE_INVALID 0xffffffff
 
@@ -27,9 +70,49 @@ enum tf_pg_tbl_lvl {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
+	struct tf_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
+	struct tf_em_caps          em_caps[TF_DIR_MAX];
+	struct stack               ext_pool[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 	uint32_t              *ext_pool_mem[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 };
 
+/** Hardware page sizes supported for EEM: 4K, 8K, 32K, 64K, 256K,
+ * 1M, 2M, 4M, 1G. Round down other page sizes to the next lower
+ * hardware page size supported.
+ */
+#define PAGE_SHIFT 22 /** 4M */
+
+#if (PAGE_SHIFT < 12)				/** < 4K >> 4K */
+#define TF_EM_PAGE_SHIFT 12
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
+#elif (PAGE_SHIFT <= 13)			/** 4K, 8K */
+#define TF_EM_PAGE_SHIFT 13
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
+#elif (PAGE_SHIFT < 16)				/** 16K, 32K >> 8K */
+#define TF_EM_PAGE_SHIFT 15
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_32K
+#elif (PAGE_SHIFT <= 17)			/** 64K, 128K >> 64K */
+#define TF_EM_PAGE_SHIFT 16
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
+#elif (PAGE_SHIFT <= 19)			/** 256K, 512K >> 256K */
+#define TF_EM_PAGE_SHIFT 18
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
+#elif (PAGE_SHIFT <= 21)			/** 1M */
+#define TF_EM_PAGE_SHIFT 20
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
+#elif (PAGE_SHIFT <= 22)			/** 2M, 4M */
+#define TF_EM_PAGE_SHIFT 21
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
+#elif (PAGE_SHIFT <= 29)			/** 8M ... 512M >> 4M */
+#define TF_EM_PAGE_SHIFT 22
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
+#else						/** >= 1G >> 1G */
+#define TF_EM_PAGE_SHIFT	30
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#endif
+
+#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
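+
+/*
+ * Worked example: with PAGE_SHIFT set to 22 (4M) above, the #elif
+ * chain selects TF_EM_PAGE_SHIFT 21, so TF_EM_PAGE_SIZE and
+ * TF_EM_PAGE_ALIGNMENT are both 2M and the firmware is asked for
+ * 2M context pages (HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M).
+ */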
+
 /**
  * Initialize table pool structure to indicate
  * no table scope has been associated with the
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 13/34] net/bnxt: fetch SVIF information from the firmware
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (11 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 12/34] net/bnxt: add EM/EEM functionality Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 14/34] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
                         ` (23 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

SVIF (source virtual interface) is used to represent a physical port, a
physical function, or a virtual function. SVIF is compared during L2
context and exact match lookups in the TX direction, and is masked
against the port information during L2 context and exact match lookups
in the RX direction. Hence, the driver needs this SVIF information to
program the L2 context and exact match tables.
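
A hedged sketch of how a caller might consume the new helper is below.
bnxt_get_svif() is the function added by this patch; the key structure
and wrapper are hypothetical illustrations only:

    #include <stdint.h>
    #include <stdbool.h>

    uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);

    /* Hypothetical match-key fragment carrying the SVIF field. */
    struct l2_ctx_key {
        uint16_t svif;
    };

    static void
    build_l2_ctx_key(uint16_t port_id, bool use_func_svif,
                     struct l2_ctx_key *key)
    {
        /* func_svif selects bp->func_svif, else bp->port_svif. */
        key->svif = bnxt_get_svif(port_id, use_func_svif);
    }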

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  6 ++++++
 drivers/net/bnxt/bnxt_ethdev.c | 14 ++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.c   | 34 ++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_hwrm.h   |  1 +
 4 files changed, 55 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index a8e57ca..2ed56f4 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -682,6 +682,9 @@ struct bnxt {
 #define BNXT_FLOW_ID_MASK	0x0000ffff
 	struct bnxt_mark_info	*mark_table;
 
+#define	BNXT_SVIF_INVALID	0xFFFF
+	uint16_t		func_svif;
+	uint16_t		port_svif;
 	struct tf               tfp;
 };
 
@@ -723,4 +726,7 @@ extern int bnxt_logtype_driver;
 
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
+
+uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
+
 #endif
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 93d0062..f3cc745 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4696,6 +4696,18 @@ static void bnxt_config_vf_req_fwd(struct bnxt *bp)
 	ALLOW_FUNC(HWRM_VNIC_TPA_CFG);
 }
 
+uint16_t
+bnxt_get_svif(uint16_t port_id, bool func_svif)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port_id];
+	bp = eth_dev->data->dev_private;
+
+	return func_svif ? bp->func_svif : bp->port_svif;
+}
+
 static int bnxt_init_fw(struct bnxt *bp)
 {
 	uint16_t mtu;
@@ -4731,6 +4743,8 @@ static int bnxt_init_fw(struct bnxt *bp)
 	if (rc)
 		return rc;
 
+	bnxt_hwrm_port_mac_qcfg(bp);
+
 	rc = bnxt_hwrm_cfa_adv_flow_mgmt_qcaps(bp);
 	if (rc)
 		return rc;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 443553b..0eaf917 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3010,6 +3010,8 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	struct hwrm_func_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
 	uint16_t flags;
 	int rc = 0;
+	uint16_t svif_info;
+	bp->func_svif = BNXT_SVIF_INVALID;
 
 	HWRM_PREP(&req, HWRM_FUNC_QCFG, BNXT_USE_CHIMP_MB);
 	req.fid = rte_cpu_to_le_16(0xffff);
@@ -3020,6 +3022,12 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 
 	/* Hard Coded.. 0xfff VLAN ID mask */
 	bp->vlan = rte_le_to_cpu_16(resp->vlan) & 0xfff;
+
+	svif_info = rte_le_to_cpu_16(resp->svif_info);
+	if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
+		bp->func_svif =	svif_info &
+				     HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK;
+
 	flags = rte_le_to_cpu_16(resp->flags);
 	if (BNXT_PF(bp) && (flags & HWRM_FUNC_QCFG_OUTPUT_FLAGS_MULTI_HOST))
 		bp->flags |= BNXT_FLAG_MULTI_HOST;
@@ -3056,6 +3064,32 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 	return rc;
 }
 
+int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp)
+{
+	struct hwrm_port_mac_qcfg_input req = {0};
+	struct hwrm_port_mac_qcfg_output *resp = bp->hwrm_cmd_resp_addr;
+	uint16_t port_svif_info;
+	int rc;
+
+	bp->port_svif = BNXT_SVIF_INVALID;
+
+	HWRM_PREP(&req, HWRM_PORT_MAC_QCFG, BNXT_USE_CHIMP_MB);
+
+	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+
+	HWRM_CHECK_RESULT();
+
+	port_svif_info = rte_le_to_cpu_16(resp->port_svif_info);
+	if (port_svif_info &
+	    HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_VALID)
+		bp->port_svif = port_svif_info &
+			HWRM_PORT_MAC_QCFG_OUTPUT_PORT_SVIF_INFO_PORT_SVIF_MASK;
+
+	HWRM_UNLOCK();
+
+	return 0;
+}
+
 static void copy_func_cfg_to_qcaps(struct hwrm_func_cfg_input *fcfg,
 				   struct hwrm_func_qcaps_output *qcaps)
 {
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index df7aa74..0079d8a 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -193,6 +193,7 @@ int bnxt_hwrm_port_qstats(struct bnxt *bp);
 int bnxt_hwrm_port_clr_stats(struct bnxt *bp);
 int bnxt_hwrm_port_led_cfg(struct bnxt *bp, bool led_on);
 int bnxt_hwrm_port_led_qcaps(struct bnxt *bp);
+int bnxt_hwrm_port_mac_qcfg(struct bnxt *bp);
 int bnxt_hwrm_func_cfg_vf_set_flags(struct bnxt *bp, uint16_t vf,
 					uint32_t flags);
 void vf_vnic_set_rxmask_cb(struct bnxt_vnic_info *vnic, void *flagp);
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 14/34] net/bnxt: fetch vnic info from DPDK port
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (12 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 13/34] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature Venkat Duvvuru
                         ` (22 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

The VNIC is needed for the driver to program the action record for RX
flows; it determines which receive rings are used to place the received
packets. This patch introduces a routine that converts a given DPDK
port to its VNIC.
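
A hedged illustration follows. bnxt_get_vnic_id() is the routine added
by this patch; the action-record structure is a hypothetical stand-in:

    #include <stdint.h>

    uint16_t bnxt_get_vnic_id(uint16_t port);

    /* Hypothetical action-record fragment for an RX flow. */
    struct rx_action_rec {
        uint16_t dst_vnic;
    };

    static void
    set_rx_destination(uint16_t port_id, struct rx_action_rec *act)
    {
        /* Returns the fw_vnic_id of the port's default VNIC. */
        act->dst_vnic = bnxt_get_vnic_id(port_id);
    }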

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  1 +
 drivers/net/bnxt/bnxt_ethdev.c | 15 +++++++++++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 2ed56f4..c4507f7 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -727,6 +727,7 @@ extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
 
+uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
 
 #endif
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index f3cc745..57ed90f 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4708,6 +4708,21 @@ bnxt_get_svif(uint16_t port_id, bool func_svif)
 	return func_svif ? bp->func_svif : bp->port_svif;
 }
 
+uint16_t
+bnxt_get_vnic_id(uint16_t port)
+{
+	struct rte_eth_dev *eth_dev;
+	struct bnxt_vnic_info *vnic;
+	struct bnxt *bp;
+
+	eth_dev = &rte_eth_devices[port];
+	bp = eth_dev->data->dev_private;
+
+	vnic = BNXT_GET_DEFAULT_VNIC(bp);
+
+	return vnic->fw_vnic_id;
+}
+
 static int bnxt_init_fw(struct bnxt *bp)
 {
 	uint16_t mtu;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (13 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 14/34] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 16/34] net/bnxt: add support for ULP session manager init Venkat Duvvuru
                         ` (21 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

This feature can be enabled by passing
"-w 0000:0d:00.0,host-based-truflow=1” to the DPDK application.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  4 ++-
 drivers/net/bnxt/bnxt_ethdev.c | 73 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 76 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index c4507f7..cd84ebd 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -685,7 +685,9 @@ struct bnxt {
 #define	BNXT_SVIF_INVALID	0xFFFF
 	uint16_t		func_svif;
 	uint16_t		port_svif;
-	struct tf               tfp;
+
+	struct tf		tfp;
+	uint8_t			truflow;
 };
 
 int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 57ed90f..c4bbf1d 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -12,6 +12,7 @@
 #include <rte_malloc.h>
 #include <rte_cycles.h>
 #include <rte_alarm.h>
+#include <rte_kvargs.h>
 
 #include "bnxt.h"
 #include "bnxt_filter.h"
@@ -126,6 +127,18 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 				     DEV_RX_OFFLOAD_SCATTER | \
 				     DEV_RX_OFFLOAD_RSS_HASH)
 
+#define BNXT_DEVARG_TRUFLOW	"host-based-truflow"
+static const char *const bnxt_dev_args[] = {
+	BNXT_DEVARG_TRUFLOW,
+	NULL
+};
+
+/*
+ * truflow == false to disable the feature
+ * truflow == true to enable the feature
+ */
+#define	BNXT_DEVARG_TRUFLOW_INVALID(truflow)	((truflow) > 1)
+
 static int bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask);
 static void bnxt_print_link_info(struct rte_eth_dev *eth_dev);
 static int bnxt_dev_uninit(struct rte_eth_dev *eth_dev);
@@ -4854,6 +4867,63 @@ static int bnxt_init_resources(struct bnxt *bp, bool reconfig_dev)
 }
 
 static int
+bnxt_parse_devarg_truflow(__rte_unused const char *key,
+			  const char *value, void *opaque_arg)
+{
+	struct bnxt *bp = opaque_arg;
+	unsigned long truflow;
+	char *end = NULL;
+
+	if (!value || !opaque_arg) {
+		PMD_DRV_LOG(ERR,
+			    "Invalid parameter passed to truflow devargs.\n");
+		return -EINVAL;
+	}
+
+	truflow = strtoul(value, &end, 10);
+	if (end == NULL || *end != '\0' ||
+	    (truflow == ULONG_MAX && errno == ERANGE)) {
+		PMD_DRV_LOG(ERR,
+			    "Invalid parameter passed to truflow devargs.\n");
+		return -EINVAL;
+	}
+
+	if (BNXT_DEVARG_TRUFLOW_INVALID(truflow)) {
+		PMD_DRV_LOG(ERR,
+			    "Invalid value passed to truflow devargs.\n");
+		return -EINVAL;
+	}
+
+	bp->truflow = truflow;
+	if (bp->truflow)
+		PMD_DRV_LOG(INFO, "Host-based truflow feature enabled.\n");
+
+	return 0;
+}
+
+static void
+bnxt_parse_dev_args(struct bnxt *bp, struct rte_devargs *devargs)
+{
+	struct rte_kvargs *kvlist;
+
+	if (devargs == NULL)
+		return;
+
+	kvlist = rte_kvargs_parse(devargs->args, bnxt_dev_args);
+	if (kvlist == NULL)
+		return;
+
+	/*
+	 * Handler for "truflow" devarg.
+	 * Invoked, for example, as: "-w 0000:00:0d.0,host-based-truflow=1"
+	 */
+	rte_kvargs_process(kvlist, BNXT_DEVARG_TRUFLOW,
+			   bnxt_parse_devarg_truflow, bp);
+
+	rte_kvargs_free(kvlist);
+}
+
+static int
 bnxt_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
@@ -4879,6 +4949,9 @@ bnxt_dev_init(struct rte_eth_dev *eth_dev)
 
 	bp = eth_dev->data->dev_private;
 
+	/* Parse dev arguments passed on when starting the DPDK application. */
+	bnxt_parse_dev_args(bp, pci_dev->device.devargs);
+
 	bp->flags &= ~BNXT_FLAG_RX_VECTOR_PKT_MODE;
 
 	if (bnxt_vf_pciid(pci_dev->id.device_id))
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 16/34] net/bnxt: add support for ULP session manager init
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (14 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 17/34] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
                         ` (20 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Mike Baucom

A ULP session will contain all the resources needed to support
rte flow offloads. A session is initialized as part of rte_eth_device
start. A DPDK application can have multiple interfaces, which
means rte_eth_device start will be called for each of these devices.
The ULP session manager will make sure that a single ULP session is
initialized only once. Apart from this, it also initializes the MARK
database, EEM table & flow database. The ULP session manager also
manages a list of all opened ULP sessions.
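
As a concrete illustration of the mark-manager sizing below (the values
come from the BNXT_ULP_DEVICE_ID_WH_PLUS entry in ulp_template_db.c and
the arithmetic mirrors ulp_mark_db_init()):

    /* With gfid_entries = 65536 and num_flows = 32768: */
    uint32_t gfid_max      = 65536 - 1;      /* 0xFFFF */
    uint32_t gfid_mask     = 65536 / 2 - 1;  /* 0x7FFF */
    uint32_t gfid_type_bit = 65536 / 2;      /* 0x8000 */
    /* gfid_tbl is sized 2 * num_flows to cover the hash-type bit. */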

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   6 +-
 drivers/net/bnxt/bnxt.h                       |   5 +
 drivers/net/bnxt/bnxt_ethdev.c                |   4 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |  35 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            | 527 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            | 100 +++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 187 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |  77 ++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c        |  94 +++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h        |  49 +++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     |  27 ++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |  35 ++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  40 ++
 13 files changed, 1185 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.c
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.h
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_struct.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 4c95847..bb9b888 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -44,7 +44,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt_rxtx_vec_sse.c
 endif
 
 ifeq ($(CONFIG_RTE_LIBRTE_BNXT_PMD), y)
-CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core
+CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core -I$(SRCDIR)/tf_ulp
 endif
 
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c
@@ -57,6 +57,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index cd84ebd..cd20740 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -22,6 +22,7 @@
 #include "bnxt_util.h"
 
 #include "tf_core.h"
+#include "bnxt_ulp.h"
 
 /* Vendor ID */
 #define PCI_VENDOR_ID_BROADCOM		0x14E4
@@ -687,6 +688,7 @@ struct bnxt {
 	uint16_t		port_svif;
 
 	struct tf		tfp;
+	struct bnxt_ulp_context	ulp_ctx;
 	uint8_t			truflow;
 };
 
@@ -729,6 +731,9 @@ extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
 
+int32_t bnxt_ulp_init(struct bnxt *bp);
+void bnxt_ulp_deinit(struct bnxt *bp);
+
 uint16_t bnxt_get_vnic_id(uint16_t port);
 uint16_t bnxt_get_svif(uint16_t port_id, bool func_svif);
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index c4bbf1d..1703ce3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -904,6 +904,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 	pthread_mutex_lock(&bp->def_cp_lock);
 	bnxt_schedule_fw_health_check(bp);
 	pthread_mutex_unlock(&bp->def_cp_lock);
+
+	if (bp->truflow)
+		bnxt_ulp_init(bp);
+
 	return 0;
 
 error:
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
new file mode 100644
index 0000000..3516df4
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_TF_COMMON_H_
+#define _BNXT_TF_COMMON_H_
+
+#define BNXT_TF_DBG(lvl, fmt, args...)	PMD_DRV_LOG(lvl, fmt, ## args)
+
+#define BNXT_ULP_EM_FLOWS			8192
+#define BNXT_ULP_1M_FLOWS			1000000
+#define BNXT_EEM_RX_GLOBAL_ID_MASK		(BNXT_ULP_1M_FLOWS - 1)
+#define BNXT_EEM_TX_GLOBAL_ID_MASK		(BNXT_ULP_1M_FLOWS - 1)
+#define BNXT_EEM_HASH_KEY2_USED			0x8000000
+#define BNXT_EEM_RX_HW_HASH_KEY2_BIT		BNXT_ULP_1M_FLOWS
+#define	BNXT_ULP_DFLT_RX_MAX_KEY		512
+#define	BNXT_ULP_DFLT_RX_MAX_ACTN_ENTRY		256
+#define	BNXT_ULP_DFLT_RX_MEM			0
+#define	BNXT_ULP_RX_NUM_FLOWS			32
+#define	BNXT_ULP_RX_TBL_IF_ID			0
+#define	BNXT_ULP_DFLT_TX_MAX_KEY		512
+#define	BNXT_ULP_DFLT_TX_MAX_ACTN_ENTRY		256
+#define	BNXT_ULP_DFLT_TX_MEM			0
+#define	BNXT_ULP_TX_NUM_FLOWS			32
+#define	BNXT_ULP_TX_TBL_IF_ID			0
+
+struct bnxt_ulp_mark_tbl *
+bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
+
+int32_t
+bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_mark_tbl *mark_tbl);
+
+#endif /* _BNXT_TF_COMMON_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
new file mode 100644
index 0000000..7afc6bf
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -0,0 +1,527 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include <rte_tailq.h>
+
+#include "bnxt_ulp.h"
+#include "bnxt_tf_common.h"
+#include "bnxt.h"
+#include "tf_core.h"
+#include "tf_ext_flow_handle.h"
+
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "ulp_mark_mgr.h"
+#include "ulp_flow_db.h"
+
+/* Linked list of all TF sessions. */
+STAILQ_HEAD(, bnxt_ulp_session_state) bnxt_ulp_session_list =
+			STAILQ_HEAD_INITIALIZER(bnxt_ulp_session_list);
+
+/* Mutex to synchronize bnxt_ulp_session_list operations. */
+static pthread_mutex_t bnxt_ulp_global_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+/*
+ * Initialize a ULP session.
+ * A ULP session will contain all the resources needed to support rte flow
+ * offloads. A session is initialized as part of rte_eth_device start.
+ * A single vswitch instance can have multiple uplinks, which means
+ * rte_eth_device start will be called for each of these devices.
+ * ULP session manager will make sure that a single ULP session is only
+ * initialized once. Apart from this, it also initializes MARK database,
+ * EEM table & flow database. ULP session manager also manages a list of
+ * all opened ULP sessions.
+ */
+static int32_t
+ulp_ctx_session_open(struct bnxt *bp,
+		     struct bnxt_ulp_session_state *session)
+{
+	struct rte_eth_dev		*ethdev = bp->eth_dev;
+	int32_t				rc = 0;
+	struct tf_open_session_parms	params;
+
+	memset(&params, 0, sizeof(params));
+
+	rc = rte_eth_dev_get_name_by_port(ethdev->data->port_id,
+					  params.ctrl_chan_name);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Invalid port %d, rc = %d\n",
+			    ethdev->data->port_id, rc);
+		return rc;
+	}
+
+	rc = tf_open_session(&bp->tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to open TF session - %s, rc = %d\n",
+			    params.ctrl_chan_name, rc);
+		return -EINVAL;
+	}
+	session->session_opened = 1;
+	session->g_tfp = &bp->tfp;
+	return rc;
+}
+
+static void
+bnxt_init_tbl_scope_parms(struct bnxt *bp,
+			  struct tf_alloc_tbl_scope_parms *params)
+{
+	struct bnxt_ulp_device_params	*dparms;
+	uint32_t dev_id;
+	int rc;
+
+	rc = bnxt_ulp_cntxt_dev_id_get(&bp->ulp_ctx, &dev_id);
+	if (rc)
+		/* TBD: For now, just use default. */
+		dparms = NULL;
+	else
+		dparms = bnxt_ulp_device_params_get(dev_id);
+
+	if (!dparms) {
+		params->rx_max_key_sz_in_bits = BNXT_ULP_DFLT_RX_MAX_KEY;
+		params->rx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_RX_MAX_ACTN_ENTRY;
+		params->rx_mem_size_in_mb = BNXT_ULP_DFLT_RX_MEM;
+		params->rx_num_flows_in_k = BNXT_ULP_RX_NUM_FLOWS;
+		params->rx_tbl_if_id = BNXT_ULP_RX_TBL_IF_ID;
+
+		params->tx_max_key_sz_in_bits = BNXT_ULP_DFLT_TX_MAX_KEY;
+		params->tx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_TX_MAX_ACTN_ENTRY;
+		params->tx_mem_size_in_mb = BNXT_ULP_DFLT_TX_MEM;
+		params->tx_num_flows_in_k = BNXT_ULP_TX_NUM_FLOWS;
+		params->tx_tbl_if_id = BNXT_ULP_TX_TBL_IF_ID;
+	} else {
+		params->rx_max_key_sz_in_bits = BNXT_ULP_DFLT_RX_MAX_KEY;
+		params->rx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_RX_MAX_ACTN_ENTRY;
+		params->rx_mem_size_in_mb = BNXT_ULP_DFLT_RX_MEM;
+		params->rx_num_flows_in_k = dparms->num_flows / (1024);
+		params->rx_tbl_if_id = BNXT_ULP_RX_TBL_IF_ID;
+
+		params->tx_max_key_sz_in_bits = BNXT_ULP_DFLT_TX_MAX_KEY;
+		params->tx_max_action_entry_sz_in_bits =
+			BNXT_ULP_DFLT_TX_MAX_ACTN_ENTRY;
+		params->tx_mem_size_in_mb = BNXT_ULP_DFLT_TX_MEM;
+		params->tx_num_flows_in_k = dparms->num_flows / (1024);
+		params->tx_tbl_if_id = BNXT_ULP_TX_TBL_IF_ID;
+	}
+}
+
+/* Initialize Extended Exact Match host memory. */
+static int32_t
+ulp_eem_tbl_scope_init(struct bnxt *bp)
+{
+	struct tf_alloc_tbl_scope_parms params = {0};
+	int rc;
+
+	bnxt_init_tbl_scope_parms(bp, &params);
+
+	rc = tf_alloc_tbl_scope(&bp->tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to allocate eem table scope rc = %d\n",
+			    rc);
+		return rc;
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_set(&bp->ulp_ctx, params.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to set table scope id\n");
+		return rc;
+	}
+
+	return 0;
+}
+
+/* The function to free and deinit the ulp context data. */
+static int32_t
+ulp_ctx_deinit(struct bnxt *bp,
+	       struct bnxt_ulp_session_state *session)
+{
+	if (!session || !bp) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Free the contents */
+	if (session->cfg_data) {
+		rte_free(session->cfg_data);
+		bp->ulp_ctx.cfg_data = NULL;
+		session->cfg_data = NULL;
+	}
+	return 0;
+}
+
+/* The function to allocate and initialize the ulp context data. */
+static int32_t
+ulp_ctx_init(struct bnxt *bp,
+	     struct bnxt_ulp_session_state *session)
+{
+	struct bnxt_ulp_data	*ulp_data;
+	int32_t			rc = 0;
+
+	if (!session || !bp) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Allocate memory to hold ulp context data. */
+	ulp_data = rte_zmalloc("bnxt_ulp_data",
+			       sizeof(struct bnxt_ulp_data), 0);
+	if (!ulp_data) {
+		BNXT_TF_DBG(ERR, "Failed to allocate memory for ulp data\n");
+		return -ENOMEM;
+	}
+
+	/* Increment the ulp context data reference count usage. */
+	bp->ulp_ctx.cfg_data = ulp_data;
+	session->cfg_data = ulp_data;
+	ulp_data->ref_cnt++;
+
+	/* Open the ulp session. */
+	rc = ulp_ctx_session_open(bp, session);
+	if (rc) {
+		(void)ulp_ctx_deinit(bp, session);
+		return rc;
+	}
+	bnxt_ulp_cntxt_tfp_set(&bp->ulp_ctx, session->g_tfp);
+	return rc;
+}
+
+static int32_t
+ulp_ctx_attach(struct bnxt_ulp_context *ulp_ctx,
+	       struct bnxt_ulp_session_state *session)
+{
+	if (!ulp_ctx || !session) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Increment the ulp context data reference count usage. */
+	ulp_ctx->cfg_data = session->cfg_data;
+	ulp_ctx->cfg_data->ref_cnt++;
+
+	/* TBD call TF_session_attach. */
+	ulp_ctx->g_tfp = session->g_tfp;
+	return 0;
+}
+
+/*
+ * Initialize the state of a ULP session.
+ * If the state of a ULP session is not initialized, set its state to
+ * initialized. If the state is already initialized, do nothing.
+ */
+static void
+ulp_context_initialized(struct bnxt_ulp_session_state *session, bool *init)
+{
+	pthread_mutex_lock(&session->bnxt_ulp_mutex);
+
+	if (!session->bnxt_ulp_init) {
+		session->bnxt_ulp_init = true;
+		*init = false;
+	} else {
+		*init = true;
+	}
+
+	pthread_mutex_unlock(&session->bnxt_ulp_mutex);
+}
+
+/*
+ * Check if a ULP session is already allocated for a specific PCI
+ * domain & bus. If it is already allocated simply return the session
+ * pointer, otherwise allocate a new session.
+ */
+static struct bnxt_ulp_session_state *
+ulp_get_session(struct rte_pci_addr *pci_addr)
+{
+	struct bnxt_ulp_session_state *session;
+
+	STAILQ_FOREACH(session, &bnxt_ulp_session_list, next) {
+		if (session->pci_info.domain == pci_addr->domain &&
+		    session->pci_info.bus == pci_addr->bus) {
+			return session;
+		}
+	}
+	return NULL;
+}
+
+/*
+ * Allocate and initialize a ULP session and set its state to INITIALIZED.
+ * If it is already initialized, simply return the existing session.
+ */
+static struct bnxt_ulp_session_state *
+ulp_session_init(struct bnxt *bp,
+		 bool *init)
+{
+	struct rte_pci_device		*pci_dev;
+	struct rte_pci_addr		*pci_addr;
+	struct bnxt_ulp_session_state	*session;
+
+	if (!bp)
+		return NULL;
+
+	pci_dev = RTE_DEV_TO_PCI(bp->eth_dev->device);
+	pci_addr = &pci_dev->addr;
+
+	pthread_mutex_lock(&bnxt_ulp_global_mutex);
+
+	session = ulp_get_session(pci_addr);
+	if (!session) {
+		/* Session not found, allocate a new one */
+		session = rte_zmalloc("bnxt_ulp_session",
+				      sizeof(struct bnxt_ulp_session_state),
+				      0);
+		if (!session) {
+			BNXT_TF_DBG(ERR,
+				    "Allocation failed for bnxt_ulp_session\n");
+			pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+			return NULL;
+		} else {
+			/* Add it to the queue */
+			session->pci_info.domain = pci_addr->domain;
+			session->pci_info.bus = pci_addr->bus;
+			pthread_mutex_init(&session->bnxt_ulp_mutex, NULL);
+			STAILQ_INSERT_TAIL(&bnxt_ulp_session_list,
+					   session, next);
+		}
+	}
+	ulp_context_initialized(session, init);
+	pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+	return session;
+}
+
+/*
+ * When a port is initialized by DPDK, this function is called
+ * to initialize the ULP context and the rest of the
+ * infrastructure associated with it.
+ */
+int32_t
+bnxt_ulp_init(struct bnxt *bp)
+{
+	struct bnxt_ulp_session_state *session;
+	bool init;
+	int rc;
+
+	/*
+	 * Multiple uplink ports can be associated with a single vswitch.
+	 * Make sure only the port that is started first will initialize
+	 * the TF session.
+	 */
+	session = ulp_session_init(bp, &init);
+	if (!session) {
+		BNXT_TF_DBG(ERR, "Failed to initialize the tf session\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * If ULP is already initialized for a specific domain then simply
+	 * assign the ulp context to this rte_eth_dev.
+	 */
+	if (init) {
+		rc = ulp_ctx_attach(&bp->ulp_ctx, session);
+		if (rc) {
+			BNXT_TF_DBG(ERR,
+				    "Failed to attach the ulp context\n");
+		}
+		return rc;
+	}
+
+	/* Allocate and Initialize the ulp context. */
+	rc = ulp_ctx_init(bp, session);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the ulp context\n");
+		goto jump_to_error;
+	}
+
+	/* Create the Mark database. */
+	rc = ulp_mark_db_init(&bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the mark database\n");
+		goto jump_to_error;
+	}
+
+	/* Create the flow database. */
+	rc = ulp_flow_db_init(&bp->ulp_ctx);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the flow database\n");
+		goto jump_to_error;
+	}
+
+	/* Create the eem table scope. */
+	rc = ulp_eem_tbl_scope_init(bp);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create the eem scope table\n");
+		goto jump_to_error;
+	}
+
+	return rc;
+
+jump_to_error:
+	return -ENOMEM;
+}
+
+/* Below are the access functions to access internal data of ulp context. */
+
+/* Function to set the Mark DB into the context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
+				struct bnxt_ulp_mark_tbl *mark_tbl)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->mark_tbl = mark_tbl;
+
+	return 0;
+}
+
+/* Function to retrieve the Mark DB from the context. */
+struct bnxt_ulp_mark_tbl *
+bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return NULL;
+
+	return ulp_ctx->cfg_data->mark_tbl;
+}
+
+/* Function to set the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t dev_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		ulp_ctx->cfg_data->dev_id = dev_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to get the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_get(struct bnxt_ulp_context *ulp_ctx,
+			  uint32_t *dev_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		*dev_id = ulp_ctx->cfg_data->dev_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to get the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_get(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t *tbl_scope_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		*tbl_scope_id = ulp_ctx->cfg_data->tbl_scope_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to set the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_set(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t tbl_scope_id)
+{
+	if (ulp_ctx && ulp_ctx->cfg_data) {
+		ulp_ctx->cfg_data->tbl_scope_id = tbl_scope_id;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/* Function to set the tfp session details from the ulp context. */
+int32_t
+bnxt_ulp_cntxt_tfp_set(struct bnxt_ulp_context *ulp, struct tf *tfp)
+{
+	if (!ulp) {
+		BNXT_TF_DBG(ERR, "Invalid arguments\n");
+		return -EINVAL;
+	}
+
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	ulp->g_tfp = tfp;
+	return 0;
+}
+
+/* Function to get the tfp session details from the ulp context. */
+struct tf *
+bnxt_ulp_cntxt_tfp_get(struct bnxt_ulp_context *ulp)
+{
+	if (!ulp) {
+		BNXT_TF_DBG(ERR, "Invalid arguments\n");
+		return NULL;
+	}
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	return ulp->g_tfp;
+}
+
+/*
+ * Get the device table entry based on the device id.
+ *
+ * dev_id [in] The device id of the hardware
+ *
+ * Returns the pointer to the device parameters.
+ */
+struct bnxt_ulp_device_params *
+bnxt_ulp_device_params_get(uint32_t dev_id)
+{
+	if (dev_id < BNXT_ULP_MAX_NUM_DEVICES)
+		return &ulp_device_params[dev_id];
+	return NULL;
+}
+
+/* Function to set the flow database to the ulp context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_flow_db_set(struct bnxt_ulp_context	*ulp_ctx,
+				struct bnxt_ulp_flow_db	*flow_db)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return -EINVAL;
+	}
+
+	ulp_ctx->cfg_data->flow_db = flow_db;
+	return 0;
+}
+
+/* Function to get the flow database from the ulp context. */
+struct bnxt_ulp_flow_db	*
+bnxt_ulp_cntxt_ptr2_flow_db_get(struct bnxt_ulp_context	*ulp_ctx)
+{
+	if (!ulp_ctx || !ulp_ctx->cfg_data) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context data\n");
+		return NULL;
+	}
+
+	return ulp_ctx->cfg_data->flow_db;
+}
+
+/* Function to get the ulp context from eth device. */
+struct bnxt_ulp_context	*
+bnxt_ulp_eth_dev_ptr2_cntxt_get(struct rte_eth_dev	*dev)
+{
+	struct bnxt	*bp;
+
+	bp = (struct bnxt *)dev->data->dev_private;
+	if (!bp) {
+		BNXT_TF_DBG(ERR, "Bnxt private data is not initialized\n");
+		return NULL;
+	}
+	return &bp->ulp_ctx;
+}
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
new file mode 100644
index 0000000..d88225f
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BNXT_ULP_H_
+#define _BNXT_ULP_H_
+
+#include <inttypes.h>
+#include <stdbool.h>
+#include <sys/queue.h>
+
+#include "rte_ethdev.h"
+
+struct bnxt_ulp_data {
+	uint32_t			tbl_scope_id;
+	struct bnxt_ulp_mark_tbl	*mark_tbl;
+	uint32_t			dev_id; /* Hardware device id */
+	uint32_t			ref_cnt;
+	struct bnxt_ulp_flow_db		*flow_db;
+};
+
+struct bnxt_ulp_context {
+	struct bnxt_ulp_data	*cfg_data;
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	struct tf		*g_tfp;
+};
+
+struct bnxt_ulp_pci_info {
+	uint32_t	domain;
+	uint8_t		bus;
+};
+
+struct bnxt_ulp_session_state {
+	STAILQ_ENTRY(bnxt_ulp_session_state)	next;
+	bool					bnxt_ulp_init;
+	pthread_mutex_t				bnxt_ulp_mutex;
+	struct bnxt_ulp_pci_info		pci_info;
+	struct bnxt_ulp_data			*cfg_data;
+	/* TBD The tfp should be removed once tf_attach is implemented. */
+	struct tf				*g_tfp;
+	uint32_t				session_opened;
+};
+
+/* ULP flow id structure */
+struct rte_tf_flow {
+	uint32_t	flow_id;
+};
+
+/* Function to set the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx, uint32_t dev_id);
+
+/* Function to get the device id of the hardware. */
+int32_t
+bnxt_ulp_cntxt_dev_id_get(struct bnxt_ulp_context *ulp_ctx, uint32_t *dev_id);
+
+/* Function to set the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_set(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t tbl_scope_id);
+
+/* Function to get the table scope id of the EEM table. */
+int32_t
+bnxt_ulp_cntxt_tbl_scope_id_get(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t *tbl_scope_id);
+
+/* Function to set the tfp session details in the ulp context. */
+int32_t
+bnxt_ulp_cntxt_tfp_set(struct bnxt_ulp_context *ulp, struct tf *tfp);
+
+/* Function to get the tfp session details from ulp context. */
+struct tf *
+bnxt_ulp_cntxt_tfp_get(struct bnxt_ulp_context *ulp);
+
+/* Get the device table entry based on the device id. */
+struct bnxt_ulp_device_params *
+bnxt_ulp_device_params_get(uint32_t dev_id);
+
+int32_t
+bnxt_ulp_ctxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
+			       struct bnxt_ulp_mark_tbl *mark_tbl);
+
+struct bnxt_ulp_mark_tbl *
+bnxt_ulp_ctxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
+
+/* Function to set the flow database to the ulp context. */
+int32_t
+bnxt_ulp_cntxt_ptr2_flow_db_set(struct bnxt_ulp_context	*ulp_ctx,
+				struct bnxt_ulp_flow_db	*flow_db);
+
+/* Function to get the flow database from the ulp context. */
+struct bnxt_ulp_flow_db	*
+bnxt_ulp_cntxt_ptr2_flow_db_get(struct bnxt_ulp_context	*ulp_ctx);
+
+/* Function to get the ulp context from eth device. */
+struct bnxt_ulp_context	*
+bnxt_ulp_eth_dev_ptr2_cntxt_get(struct rte_eth_dev *dev);
+
+#endif /* _BNXT_ULP_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
new file mode 100644
index 0000000..3dd39c1
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -0,0 +1,187 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_malloc.h>
+#include "bnxt.h"
+#include "bnxt_tf_common.h"
+#include "ulp_flow_db.h"
+#include "ulp_template_struct.h"
+
+/*
+ * Helper function to allocate the flow table and initialize
+ * the stack for allocation operations.
+ *
+ * flow_db [in] Ptr to flow database structure
+ * tbl_idx [in] The index of the table to create.
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+static int32_t
+ulp_flow_db_alloc_resource(struct bnxt_ulp_flow_db *flow_db,
+			   enum bnxt_ulp_flow_db_tables tbl_idx)
+{
+	uint32_t			idx = 0;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+	uint32_t			size;
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	size = sizeof(struct ulp_fdb_resource_info) * flow_tbl->num_resources;
+	flow_tbl->flow_resources =
+			rte_zmalloc("ulp_fdb_resource_info", size, 0);
+
+	if (!flow_tbl->flow_resources) {
+		BNXT_TF_DBG(ERR, "Failed to alloc memory for flow table\n");
+		return -ENOMEM;
+	}
+	size = sizeof(uint32_t) * flow_tbl->num_resources;
+	flow_tbl->flow_tbl_stack = rte_zmalloc("flow_tbl_stack", size, 0);
+	if (!flow_tbl->flow_tbl_stack) {
+		BNXT_TF_DBG(ERR, "Failed to alloc memory flow tbl stack\n");
+		return -ENOMEM;
+	}
+	size = (flow_tbl->num_flows / sizeof(uint64_t)) + 1;
+	flow_tbl->active_flow_tbl = rte_zmalloc("active flow tbl", size, 0);
+	if (!flow_tbl->active_flow_tbl) {
+		BNXT_TF_DBG(ERR, "Failed to alloc memory active tbl\n");
+		return -ENOMEM;
+	}
+
+	/* Initialize the stack table. */
+	for (idx = 0; idx < flow_tbl->num_resources; idx++)
+		flow_tbl->flow_tbl_stack[idx] = idx;
+
+	/* Ignore the first element in the list. */
+	flow_tbl->head_index = 1;
+	/* Tail points to the last entry in the list. */
+	flow_tbl->tail_index = flow_tbl->num_resources - 1;
+	return 0;
+}
+
+/*
+ * Helper function to deallocate the flow table.
+ *
+ * flow_db [in] Ptr to flow database structure
+ * tbl_idx [in] The index of the table to free.
+ *
+ * Returns none.
+ */
+static void
+ulp_flow_db_dealloc_resource(struct bnxt_ulp_flow_db *flow_db,
+			     enum bnxt_ulp_flow_db_tables tbl_idx)
+{
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* Free all the allocated tables in the flow table. */
+	if (flow_tbl->active_flow_tbl) {
+		rte_free(flow_tbl->active_flow_tbl);
+		flow_tbl->active_flow_tbl = NULL;
+	}
+
+	if (flow_tbl->flow_tbl_stack) {
+		rte_free(flow_tbl->flow_tbl_stack);
+		flow_tbl->flow_tbl_stack = NULL;
+	}
+
+	if (flow_tbl->flow_resources) {
+		rte_free(flow_tbl->flow_resources);
+		flow_tbl->flow_resources = NULL;
+	}
+}
+
+/*
+ * Initialize the flow database. Memory is allocated in this
+ * call and assigned to the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt)
+{
+	struct bnxt_ulp_device_params		*dparms;
+	struct bnxt_ulp_flow_tbl		*flow_tbl;
+	struct bnxt_ulp_flow_db			*flow_db;
+	uint32_t				dev_id;
+
+	/* Get the dev specific number of flows that need to be supported. */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctxt, &dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(ERR, "could not fetch the device params\n");
+		return -ENODEV;
+	}
+
+	flow_db = rte_zmalloc("bnxt_ulp_flow_db",
+			      sizeof(struct bnxt_ulp_flow_db), 0);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR,
+			    "Failed to allocate memory for flow table ptr\n");
+		goto error_free;
+	}
+
+	/* Attach the flow database to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_flow_db_set(ulp_ctxt, flow_db);
+
+	/* Populate the regular flow table limits. */
+	flow_tbl = &flow_db->flow_tbl[BNXT_ULP_REGULAR_FLOW_TABLE];
+	flow_tbl->num_flows = dparms->num_flows + 1;
+	flow_tbl->num_resources = (flow_tbl->num_flows *
+				   dparms->num_resources_per_flow);
+
+	/* Populate the default flow table limits. */
+	flow_tbl = &flow_db->flow_tbl[BNXT_ULP_DEFAULT_FLOW_TABLE];
+	flow_tbl->num_flows = BNXT_FLOW_DB_DEFAULT_NUM_FLOWS + 1;
+	flow_tbl->num_resources = (flow_tbl->num_flows *
+				   BNXT_FLOW_DB_DEFAULT_NUM_RESOURCES);
+
+	/* Allocate the resource for the regular flow table. */
+	if (ulp_flow_db_alloc_resource(flow_db, BNXT_ULP_REGULAR_FLOW_TABLE))
+		goto error_free;
+	if (ulp_flow_db_alloc_resource(flow_db, BNXT_ULP_DEFAULT_FLOW_TABLE))
+		goto error_free;
+
+	/* All good so return. */
+	return 0;
+error_free:
+	ulp_flow_db_deinit(ulp_ctxt);
+	return -ENOMEM;
+}
+
+/*
+ * Deinitialize the flow database. Memory is deallocated in
+ * this call and all flows should have been purged before this
+ * call.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success.
+ */
+int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
+{
+	struct bnxt_ulp_flow_db			*flow_db;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	/* Detach the flow database from the ulp context. */
+	bnxt_ulp_cntxt_ptr2_flow_db_set(ulp_ctxt, NULL);
+
+	/* Free up all the memory. */
+	ulp_flow_db_dealloc_resource(flow_db, BNXT_ULP_REGULAR_FLOW_TABLE);
+	ulp_flow_db_dealloc_resource(flow_db, BNXT_ULP_DEFAULT_FLOW_TABLE);
+	rte_free(flow_db);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
new file mode 100644
index 0000000..a2ee8fa
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_FLOW_DB_H_
+#define _ULP_FLOW_DB_H_
+
+#include "bnxt_ulp.h"
+#include "ulp_template_db.h"
+
+#define BNXT_FLOW_DB_DEFAULT_NUM_FLOWS		128
+#define BNXT_FLOW_DB_DEFAULT_NUM_RESOURCES	5
+
+/* Structure for the flow database resource information. */
+struct ulp_fdb_resource_info {
+	/* Points to next resource in the chained list. */
+	uint32_t	nxt_resource_idx;
+	union {
+		uint64_t	resource_em_handle;
+		struct {
+			uint32_t	resource_type;
+			uint32_t	resource_hndl;
+		};
+	};
+};
+
+/* Structure for the flow database resource information. */
+struct bnxt_ulp_flow_tbl {
+	/* Flow tbl is the resource object list for each flow id. */
+	struct ulp_fdb_resource_info	*flow_resources;
+
+	/* Flow table stack to track free list of resources. */
+	uint32_t	*flow_tbl_stack;
+	uint32_t	head_index;
+	uint32_t	tail_index;
+
+	/* Table to track the active flows. */
+	uint64_t	*active_flow_tbl;
+	uint32_t	num_flows;
+	uint32_t	num_resources;
+};
+
+/* Flow database supports two tables. */
+enum bnxt_ulp_flow_db_tables {
+	BNXT_ULP_REGULAR_FLOW_TABLE,
+	BNXT_ULP_DEFAULT_FLOW_TABLE,
+	BNXT_ULP_FLOW_TABLE_MAX
+};
+
+/* Structure for the flow database resource information. */
+struct bnxt_ulp_flow_db {
+	struct bnxt_ulp_flow_tbl	flow_tbl[BNXT_ULP_FLOW_TABLE_MAX];
+};
+
+/*
+ * Initialize the flow database. Memory is allocated in this
+ * call and assigned to the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt);
+
+/*
+ * Deinitialize the flow database. Memory is deallocated in
+ * this call and all flows should have been purged before this
+ * call.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ *
+ * Returns 0 on success.
+ */
+int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
+
+#endif /* _ULP_FLOW_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
new file mode 100644
index 0000000..3f28a73
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include "bnxt_ulp.h"
+#include "tf_ext_flow_handle.h"
+#include "ulp_mark_mgr.h"
+#include "bnxt_tf_common.h"
+#include "../bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+
+/*
+ * Allocate and Initialize all Mark Manager resources for this ulp context.
+ *
+ * ctxt [in] The ulp context for the mark manager.
+ *
+ */
+int32_t
+ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_device_params *dparms;
+	struct bnxt_ulp_mark_tbl *mark_tbl = NULL;
+	uint32_t dev_id;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(DEBUG, "Invalid ULP CTXT\n");
+		return -EINVAL;
+	}
+
+	if (bnxt_ulp_cntxt_dev_id_get(ctxt, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to device parms\n");
+		return -EINVAL;
+	}
+
+	mark_tbl = rte_zmalloc("ulp_rx_mark_tbl_ptr",
+			       sizeof(struct bnxt_ulp_mark_tbl), 0);
+	if (!mark_tbl)
+		goto mem_error;
+
+	/* Allocate the LFID table based on the number of LFID entries. */
+	mark_tbl->lfid_tbl = rte_zmalloc("ulp_rx_em_flow_mark_table",
+					 dparms->lfid_entries *
+					    sizeof(struct bnxt_lfid_mark_info),
+					 0);
+
+	if (!mark_tbl->lfid_tbl)
+		goto mem_error;
+
+	/* Need to allocate 2 * Num flows to account for hash type bit. */
+	mark_tbl->gfid_tbl = rte_zmalloc("ulp_rx_eem_flow_mark_table",
+					 2 * dparms->num_flows *
+					    sizeof(struct bnxt_gfid_mark_info),
+					 0);
+	if (!mark_tbl->gfid_tbl)
+		goto mem_error;
+
+	/*
+	 * TBD: This needs to be generalized for better mark handling
+	 * These values are used to compress the FID to the allowable index
+	 * space.  The FID from hw may be the full hash.
+	 */
+	mark_tbl->gfid_max	= dparms->gfid_entries - 1;
+	mark_tbl->gfid_mask	= (dparms->gfid_entries / 2) - 1;
+	mark_tbl->gfid_type_bit = (dparms->gfid_entries / 2);
+
+	BNXT_TF_DBG(DEBUG, "GFID Max = 0x%08x\nGFID MASK = 0x%08x\n",
+		    mark_tbl->gfid_max,
+		    mark_tbl->gfid_mask);
+
+	/* Add the mark tbl to the ulp context. */
+	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
+
+	return 0;
+
+mem_error:
+	if (mark_tbl) {
+		rte_free(mark_tbl->gfid_tbl);
+		rte_free(mark_tbl->lfid_tbl);
+		rte_free(mark_tbl);
+	}
+	BNXT_TF_DBG(DEBUG,
+		    "Failed to allocate memory for mark mgr\n");
+
+	return -ENOMEM;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
new file mode 100644
index 0000000..b175abd
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -0,0 +1,49 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_MARK_MGR_H_
+#define _ULP_MARK_MGR_H_
+
+#include "bnxt_ulp.h"
+
+#define ULP_MARK_INVALID (0)
+struct bnxt_lfid_mark_info {
+	uint16_t	mark_id;
+	bool		valid;
+};
+
+struct bnxt_gfid_mark_info {
+	uint32_t	mark_id;
+	bool		valid;
+};
+
+struct bnxt_ulp_mark_tbl {
+	struct bnxt_lfid_mark_info	*lfid_tbl;
+	struct bnxt_gfid_mark_info	*gfid_tbl;
+	uint32_t			gfid_mask;
+	uint32_t			gfid_type_bit;
+	uint32_t			gfid_max;
+};
+
+/*
+ * Allocate and Initialize all Mark Manager resources for this ulp context.
+ *
+ * Initialize MARK database for GFID & LFID tables
+ * GFID: Global flow id which is based on EEM hash id.
+ * LFID: Local flow id which is the CFA action pointer.
+ * GFID is used for EEM flows, LFID is used for EM flows.
+ *
+ * The flow mapper module adds the mark_id to the MARK database.
+ *
+ * The BNXT PMD receive handler extracts the hardware flow id from the
+ * received completion record, fetches the mark_id from the MARK
+ * database using the flow id, and injects the mark_id into the packet's mbuf.
+ *
+ * ctxt [in] The ulp context for the mark manager.
+ */
+int32_t
+ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
+
+#endif /* _ULP_MARK_MGR_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
new file mode 100644
index 0000000..9670635
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+/*
+ * date: Mon Mar  9 02:37:53 2020
+ * version: 0.0
+ */
+
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+
+struct bnxt_ulp_device_params ulp_device_params[] = {
+	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
+		.global_fid_enable       = BNXT_ULP_SYM_YES,
+		.byte_order              = (enum bnxt_ulp_byte_order)
+						BNXT_ULP_SYM_LITTLE_ENDIAN,
+		.encap_byte_swap         = 1,
+		.lfid_entries            = 16384,
+		.lfid_entry_size         = 4,
+		.gfid_entries            = 65536,
+		.gfid_entry_size         = 4,
+		.num_flows               = 32768,
+		.num_resources_per_flow  = 8
+	}
+};
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
new file mode 100644
index 0000000..ba2a101
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+/*
+ * date: Mon Mar  9 02:37:53 2020
+ * version: 0.0
+ */
+
+#ifndef _ULP_TEMPLATE_DB_H_
+#define _ULP_TEMPLATE_DB_H_
+
+#define BNXT_ULP_MAX_NUM_DEVICES 4
+
+enum bnxt_ulp_byte_order {
+	BNXT_ULP_BYTE_ORDER_BE,
+	BNXT_ULP_BYTE_ORDER_LE,
+	BNXT_ULP_BYTE_ORDER_LAST
+};
+
+enum bnxt_ulp_device_id {
+	BNXT_ULP_DEVICE_ID_WH_PLUS,
+	BNXT_ULP_DEVICE_ID_THOR,
+	BNXT_ULP_DEVICE_ID_STINGRAY,
+	BNXT_ULP_DEVICE_ID_STINGRAY2,
+	BNXT_ULP_DEVICE_ID_LAST
+};
+
+enum bnxt_ulp_sym {
+	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
+	BNXT_ULP_SYM_YES = 1
+};
+
+#endif /* _ULP_TEMPLATE_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
new file mode 100644
index 0000000..4b9d0b2
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_TEMPLATE_STRUCT_H_
+#define _ULP_TEMPLATE_STRUCT_H_
+
+#include <stdint.h>
+#include "rte_ether.h"
+#include "rte_icmp.h"
+#include "rte_ip.h"
+#include "rte_tcp.h"
+#include "rte_udp.h"
+#include "rte_esp.h"
+#include "rte_sctp.h"
+#include "rte_flow.h"
+#include "tf_core.h"
+
+/* Device specific parameters. */
+struct bnxt_ulp_device_params {
+	uint8_t				description[16];
+	uint32_t			global_fid_enable;
+	enum bnxt_ulp_byte_order	byte_order;
+	uint8_t				encap_byte_swap;
+	uint32_t			lfid_entries;
+	uint32_t			lfid_entry_size;
+	uint64_t			gfid_entries;
+	uint32_t			gfid_entry_size;
+	uint64_t			num_flows;
+	uint32_t			num_resources_per_flow;
+};
+
+/*
+ * The ulp_device_params is indexed by the dev_id.
+ * This table maintains the device specific parameters.
+ */
+extern struct bnxt_ulp_device_params ulp_device_params[];
+
+#endif /* _ULP_TEMPLATE_STRUCT_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 17/34] net/bnxt: add support for ULP session manager cleanup
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (15 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 16/34] net/bnxt: add support for ULP session manager init Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 18/34] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
                         ` (19 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Mike Baucom

A ULP session will contain all the resources needed to support
rte flow offloads. A session is initialized as part of rte_eth_device
start. A DPDK application can have multiple interfaces, which
means rte_eth_device start will be called for each of these devices.
The ULP session manager will make sure that a single ULP session is
initialized only once. Apart from this, it also initializes the MARK
database, EEM table & flow database. The ULP session manager also
manages a list of all opened ULP sessions.

This patch adds support for cleaning up resources initialized for ULP
sessions.
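
The shared teardown is gated on session ownership; as a condensed
sketch of the check added below (mirroring ulp_ctx_deinit_allowed()
from the diff, simplified to a single expression):

    /* True only for the bnxt device whose tfp opened the TF session. */
    bool ulp_ctx_deinit_allowed(void *ptr)
    {
        struct bnxt *bp = ptr;

        return bp && (&bp->tfp == bp->ulp_ctx.g_tfp);
    }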

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c         |   3 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c     | 167 ++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h     |  10 ++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  25 +++++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |   8 ++
 5 files changed, 212 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 1703ce3..2f08921 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -951,6 +951,9 @@ static void bnxt_dev_stop_op(struct rte_eth_dev *eth_dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
 
+	if (bp->truflow)
+		bnxt_ulp_deinit(bp);
+
 	eth_dev->data->dev_started = 0;
 	/* Prevent crashes when queues are still in use */
 	eth_dev->rx_pkt_burst = &bnxt_dummy_recv_pkts;
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 7afc6bf..3795c6d 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -28,6 +28,27 @@ STAILQ_HEAD(, bnxt_ulp_session_state) bnxt_ulp_session_list =
 static pthread_mutex_t bnxt_ulp_global_mutex = PTHREAD_MUTEX_INITIALIZER;
 
 /*
+ * Allow the deletion of context only for the bnxt device that
+ * created the session
+ * TBD - The implementation of the function should change to
+ * using the reference count once tf_session_attach functionality
+ * is fixed.
+ */
+bool
+ulp_ctx_deinit_allowed(void *ptr)
+{
+	struct bnxt *bp = (struct bnxt *)ptr;
+
+	if (!bp)
+		return 0;
+
+	if (&bp->tfp == bp->ulp_ctx.g_tfp)
+		return 1;
+
+	return 0;
+}
+
+/*
  * Initialize an ULP session.
  * An ULP session will contain all the resources needed to support rte flow
  * offloads. A session is initialized as part of rte_eth_device start.
@@ -67,6 +88,22 @@ ulp_ctx_session_open(struct bnxt *bp,
 	return rc;
 }
 
+/*
+ * Close the ULP session.
+ * It takes the ulp context pointer.
+ */
+static void
+ulp_ctx_session_close(struct bnxt *bp,
+		      struct bnxt_ulp_session_state *session)
+{
+	/* close the session in the hardware */
+	if (session->session_opened)
+		tf_close_session(&bp->tfp);
+	session->session_opened = 0;
+	session->g_tfp = NULL;
+	bp->ulp_ctx.g_tfp = NULL;
+}
+
 static void
 bnxt_init_tbl_scope_parms(struct bnxt *bp,
 			  struct tf_alloc_tbl_scope_parms *params)
@@ -138,6 +175,41 @@ ulp_eem_tbl_scope_init(struct bnxt *bp)
 	return 0;
 }
 
+/* Free Extended Exact Match host memory */
+static int32_t
+ulp_eem_tbl_scope_deinit(struct bnxt *bp, struct bnxt_ulp_context *ulp_ctx)
+{
+	struct tf_free_tbl_scope_parms	params = {0};
+	struct tf			*tfp;
+	int32_t				rc = 0;
+
+	if (!ulp_ctx || !ulp_ctx->cfg_data)
+		return -EINVAL;
+
+	/* Free the resources for the last device */
+	if (!ulp_ctx_deinit_allowed(bp))
+		return rc;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get the truflow pointer\n");
+		return -EINVAL;
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp_ctx, &params.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get the table scope id\n");
+		return -EINVAL;
+	}
+
+	rc = tf_free_tbl_scope(tfp, &params);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to free table scope\n");
+		return -EINVAL;
+	}
+	return rc;
+}
+
 /* The function to free and deinit the ulp context data. */
 static int32_t
 ulp_ctx_deinit(struct bnxt *bp,
@@ -148,6 +220,9 @@ ulp_ctx_deinit(struct bnxt *bp,
 		return -EINVAL;
 	}
 
+	/* close the tf session */
+	ulp_ctx_session_close(bp, session);
+
 	/* Free the contents */
 	if (session->cfg_data) {
 		rte_free(session->cfg_data);
@@ -211,6 +286,36 @@ ulp_ctx_attach(struct bnxt_ulp_context *ulp_ctx,
 	return 0;
 }
 
+static int32_t
+ulp_ctx_detach(struct bnxt *bp,
+	       struct bnxt_ulp_session_state *session)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+
+	if (!bp || !session) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	ulp_ctx = &bp->ulp_ctx;
+
+	if (!ulp_ctx->cfg_data)
+		return 0;
+
+	/* TBD call TF_session_detach */
+
+	/* Increment the ulp context data reference count usage. */
+	if (ulp_ctx->cfg_data->ref_cnt >= 1) {
+		ulp_ctx->cfg_data->ref_cnt--;
+		if (ulp_ctx_deinit_allowed(bp))
+			ulp_ctx_deinit(bp, session);
+		ulp_ctx->cfg_data = NULL;
+		ulp_ctx->g_tfp = NULL;
+		return 0;
+	}
+	BNXT_TF_DBG(ERR, "context detach on invalid data\n");
+	return 0;
+}
+
 /*
  * Initialize the state of an ULP session.
  * If the state of an ULP session is not initialized, set it's state to
@@ -297,6 +402,26 @@ ulp_session_init(struct bnxt *bp,
 }
 
 /*
+ * When a device is closed, remove its associated session from the global
+ * session list.
+ */
+static void
+ulp_session_deinit(struct bnxt_ulp_session_state *session)
+{
+	if (!session)
+		return;
+
+	if (!session->cfg_data) {
+		pthread_mutex_lock(&bnxt_ulp_global_mutex);
+		STAILQ_REMOVE(&bnxt_ulp_session_list, session,
+			      bnxt_ulp_session_state, next);
+		pthread_mutex_destroy(&session->bnxt_ulp_mutex);
+		rte_free(session);
+		pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+	}
+}
+
+/*
 * When a port is initialized by DPDK, this function is called
 * to initialize the ULP context and the rest of the
  * infrastructure associated with it.
@@ -363,12 +488,52 @@ bnxt_ulp_init(struct bnxt *bp)
 	return rc;
 
 jump_to_error:
+	bnxt_ulp_deinit(bp);
 	return -ENOMEM;
 }
 
 /* Below are the access functions to access internal data of ulp context. */
 
-/* Function to set the Mark DB into the context. */
+/*
+ * When a port is deinitialized by DPDK, this function is called
+ * to clear the ULP context and the rest of the
+ * infrastructure associated with it.
+ */
+void
+bnxt_ulp_deinit(struct bnxt *bp)
+{
+	struct bnxt_ulp_session_state	*session;
+	struct rte_pci_device		*pci_dev;
+	struct rte_pci_addr		*pci_addr;
+
+	/* Get the session first */
+	pci_dev = RTE_DEV_TO_PCI(bp->eth_dev->device);
+	pci_addr = &pci_dev->addr;
+	pthread_mutex_lock(&bnxt_ulp_global_mutex);
+	session = ulp_get_session(pci_addr);
+	pthread_mutex_unlock(&bnxt_ulp_global_mutex);
+
+	/* session not found, so just exit */
+	if (!session)
+		return;
+
+	/* cleanup the eem table scope */
+	ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx);
+
+	/* cleanup the flow database */
+	ulp_flow_db_deinit(&bp->ulp_ctx);
+
+	/* Delete the Mark database */
+	ulp_mark_db_deinit(&bp->ulp_ctx);
+
+	/* Delete the ulp context and tf session */
+	ulp_ctx_detach(bp, session);
+
+	/* Finally delete the bnxt session */
+	ulp_session_deinit(session);
+}
+
+/* Function to set the Mark DB into the context */
 int32_t
 bnxt_ulp_cntxt_ptr2_mark_db_set(struct bnxt_ulp_context *ulp_ctx,
 				struct bnxt_ulp_mark_tbl *mark_tbl)
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index d88225f..b3e9e96 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -47,6 +47,16 @@ struct rte_tf_flow {
 	uint32_t	flow_id;
 };
 
+/*
+ * Allow the deletion of context only for the bnxt device that
+ * created the session
+ * TBD - The implementation of the function should change to
+ * using the reference count once tf_session_attach functionality
+ * is fixed.
+ */
+bool
+ulp_ctx_deinit_allowed(void *bp);
+
 /* Function to set the device id of the hardware. */
 int32_t
 bnxt_ulp_cntxt_dev_id_set(struct bnxt_ulp_context *ulp_ctx, uint32_t dev_id);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 3f28a73..9e4307e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -92,3 +92,28 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 
 	return -ENOMEM;
 }
+
+/*
+ * Release all resources in the Mark Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ */
+int32_t
+ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
+{
+	struct bnxt_ulp_mark_tbl *mtbl;
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+
+	if (mtbl) {
+		rte_free(mtbl->gfid_tbl);
+		rte_free(mtbl->lfid_tbl);
+		rte_free(mtbl);
+
+		/* Safe to ignore on deinit */
+		(void)bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, NULL);
+	}
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index b175abd..5948683 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -46,4 +46,12 @@ struct bnxt_ulp_mark_tbl {
 int32_t
 ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Release all resources in the Mark Manager for this ulp context
+ *
+ * ctxt [in] The ulp context for the mark manager
+ */
+int32_t
+ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
+
 #endif /* _ULP_MARK_MGR_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 18/34] net/bnxt: add helper functions for blob/regfile ops
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (16 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 17/34] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 19/34] net/bnxt: add support to process action tables Venkat Duvvuru
                         ` (18 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

1. blob routines for managing key/mask/result data
2. regfile routines for managing temporary data during flow
   construction (see the usage sketch below)
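
A minimal usage sketch of the blob helpers added below (the values
packed are illustrative only):

    struct ulp_blob blob;
    uint8_t mac[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
    uint16_t bitlen;
    uint8_t *data;

    if (!ulp_blob_init(&blob, 128, BNXT_ULP_BYTE_ORDER_BE))
        return; /* bad args or blob too small */
    /* pack a 48-bit MAC followed by a 4-bit pad */
    if (!ulp_blob_push(&blob, mac, ULP_BYTE_2_BITS(6)) ||
        !ulp_blob_pad_push(&blob, 4))
        return; /* blob exhausted */
    data = ulp_blob_data_get(&blob, &bitlen); /* bitlen == 52 */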

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                 |   2 +
 drivers/net/bnxt/tf_ulp/ulp_template_db.h |  12 +
 drivers/net/bnxt/tf_ulp/ulp_utils.c       | 521 ++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_utils.h       | 279 ++++++++++++++++
 4 files changed, 814 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index bb9b888..4e0dea1 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -61,6 +61,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
+
 #
 # Export include files
 #
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index ba2a101..1eed828 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -27,6 +27,18 @@ enum bnxt_ulp_device_id {
 	BNXT_ULP_DEVICE_ID_LAST
 };
 
+enum bnxt_ulp_fmf_mask {
+	BNXT_ULP_FMF_MASK_IGNORE,
+	BNXT_ULP_FMF_MASK_ANY,
+	BNXT_ULP_FMF_MASK_EXACT,
+	BNXT_ULP_FMF_MASK_WILDCARD,
+	BNXT_ULP_FMF_MASK_LAST
+};
+
+enum bnxt_ulp_regfile_index {
+	BNXT_ULP_REGFILE_INDEX_LAST
+};
+
 enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_YES = 1
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.c b/drivers/net/bnxt/tf_ulp/ulp_utils.c
new file mode 100644
index 0000000..1d463cd
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.c
@@ -0,0 +1,521 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#include "ulp_utils.h"
+#include "bnxt_tf_common.h"
+
+/*
+ * Initialize the regfile structure for writing
+ *
+ * regfile [in] Ptr to a regfile instance
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_regfile_init(struct ulp_regfile *regfile)
+{
+	/* validate the arguments */
+	if (!regfile) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+	memset(regfile, 0, sizeof(struct ulp_regfile));
+	return 1; /* Success */
+}
+
+/*
+ * Read a value from the regfile
+ *
+ * regfile [in] The regfile instance. Must be initialized prior to being used
+ *
+ * field [in] The field to be read within the regfile.
+ *
+ * data [out] The variable the value is read into
+ *
+ * returns size, zero on failure
+ */
+uint32_t
+ulp_regfile_read(struct ulp_regfile *regfile,
+		 enum bnxt_ulp_regfile_index field,
+		 uint64_t *data)
+{
+	/* validate the arguments */
+	if (!regfile || field >= BNXT_ULP_REGFILE_INDEX_LAST) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+
+	*data = regfile->entry[field].data;
+	return sizeof(*data);
+}
+
+/*
+ * Write a value to the regfile
+ *
+ * regfile [in] The regfile instance.  Must be initialized prior to being used
+ *
+ * field [in] The field to be written within the regfile.
+ *
+ * data [in] The value to be written into the regfile entry.  It is stored
+ * in the same byte order as it is passed in.
+ *
+ * returns the size of data on success, zero on failure
+ */
+uint32_t
+ulp_regfile_write(struct ulp_regfile *regfile,
+		  enum bnxt_ulp_regfile_index field,
+		  uint64_t data)
+{
+	/* validate the arguments */
+	if (!regfile || field >= BNXT_ULP_REGFILE_INDEX_LAST) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+
+	regfile->entry[field].data = data;
+	return sizeof(data); /* Success */
+}
+
+static void
+ulp_bs_put_msb(uint8_t *bs, uint16_t bitpos, uint8_t bitlen, uint8_t val)
+{
+	uint8_t bitoffs = bitpos % 8;
+	uint16_t index  = bitpos / 8;
+	uint8_t mask;
+	uint8_t tmp;
+	int8_t shift;
+
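+	/*
+	 * Write 'bitlen' bits of 'val' MSB-first starting at bit
+	 * position 'bitpos'; a negative shift below means the field
+	 * straddles a byte boundary and spills into the next byte.
+	 */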
+	tmp = bs[index];
+	mask = ((uint8_t)-1 >> (8 - bitlen));
+	shift = 8 - bitoffs - bitlen;
+	val &= mask;
+
+	if (shift >= 0) {
+		tmp &= ~(mask << shift);
+		tmp |= val << shift;
+		bs[index] = tmp;
+	} else {
+		tmp &= ~((uint8_t)-1 >> bitoffs);
+		tmp |= val >> -shift;
+		bs[index++] = tmp;
+
+		tmp = bs[index];
+		tmp &= ((uint8_t)-1 >> (bitlen - (8 - bitoffs)));
+		tmp |= val << (8 + shift);
+		bs[index] = tmp;
+	}
+}
+
+static void
+ulp_bs_put_lsb(uint8_t *bs, uint16_t bitpos, uint8_t bitlen, uint8_t val)
+{
+	uint8_t bitoffs = bitpos % 8;
+	uint16_t index  = bitpos / 8;
+	uint8_t mask;
+	uint8_t tmp;
+	uint8_t shift;
+	uint8_t partial;
+
+	tmp = bs[index];
+	shift = bitoffs;
+
+	if (bitoffs + bitlen <= 8) {
+		mask = ((1 << bitlen) - 1) << shift;
+		tmp &= ~mask;
+		tmp |= ((val << shift) & mask);
+		bs[index] = tmp;
+	} else {
+		partial = 8 - bitoffs;
+		mask = ((1 << partial) - 1) << shift;
+		tmp &= ~mask;
+		tmp |= ((val << shift) & mask);
+		bs[index++] = tmp;
+
+		val >>= partial;
+		partial = bitlen - partial;
+		mask = ((1 << partial) - 1);
+		tmp = bs[index];
+		tmp &= ~mask;
+		tmp |= (val & mask);
+		bs[index] = tmp;
+	}
+}
+
+/* Assuming that val is in Big-Endian Format */
+static uint32_t
+ulp_bs_push_lsb(uint8_t *bs, uint16_t pos, uint8_t len, uint8_t *val)
+{
+	int i;
+	int cnt = (len) / 8;
+	int tlen = len;
+
+	if (cnt > 0 && !(len % 8))
+		cnt -= 1;
+
+	for (i = 0; i < cnt; i++) {
+		ulp_bs_put_lsb(bs, pos, 8, val[cnt - i]);
+		pos += 8;
+		tlen -= 8;
+	}
+
+	/* Handle the remainder bits */
+	if (tlen)
+		ulp_bs_put_lsb(bs, pos, tlen, val[0]);
+	return len;
+}
+
+/* Assuming that val is in Big-Endian Format */
+static uint32_t
+ulp_bs_push_msb(uint8_t *bs, uint16_t pos, uint8_t len, uint8_t *val)
+{
+	int i;
+	int cnt = (len + 7) / 8;
+	int tlen = len;
+
+	/* Handle any remainder bits */
+	int tmp = len % 8;
+
+	if (!tmp)
+		tmp = 8;
+
+	ulp_bs_put_msb(bs, pos, tmp, val[0]);
+
+	pos += tmp;
+	tlen -= tmp;
+
+	for (i = 1; i < cnt; i++) {
+		ulp_bs_put_msb(bs, pos, 8, val[i]);
+		pos += 8;
+		tlen -= 8;
+	}
+
+	return len;
+}
+
+/*
+ * Initializes the blob structure for creating binary blob
+ *
+ * blob [in] The blob to be initialized
+ *
+ * bitlen [in] The bit length of the blob
+ *
+ * order [in] The byte order for the blob.  Currently only supporting
+ * big endian.  All fields are packed with this order.
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_blob_init(struct ulp_blob *blob,
+	      uint16_t bitlen,
+	      enum bnxt_ulp_byte_order order)
+{
+	/* validate the arguments */
+	if (!blob || bitlen > (8 * sizeof(blob->data))) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+	blob->bitlen = bitlen;
+	blob->byte_order = order;
+	blob->write_idx = 0;
+	memset(blob->data, 0, sizeof(blob->data));
+	return 1; /* Success */
+}
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] A pointer to bytes to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * Returns the number of bits pushed, zero on error.
+ */
+#define ULP_BLOB_BYTE		8
+#define ULP_BLOB_BYTE_HEX	0xFF
+#define BLOB_MASK_CAL(x)	((0xFF << (x)) & 0xFF)
+uint32_t
+ulp_blob_push(struct ulp_blob *blob,
+	      uint8_t *data,
+	      uint32_t datalen)
+{
+	uint32_t rc;
+
+	/* validate the arguments */
+	if (!blob || datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+
+	if (blob->byte_order == BNXT_ULP_BYTE_ORDER_BE)
+		rc = ulp_bs_push_msb(blob->data,
+				     blob->write_idx,
+				     datalen,
+				     data);
+	else
+		rc = ulp_bs_push_lsb(blob->data,
+				     blob->write_idx,
+				     datalen,
+				     data);
+	if (!rc) {
+		BNXT_TF_DBG(ERR, "Failed to write blob\n");
+		return 0;
+	}
+	blob->write_idx += datalen;
+	return datalen;
+}
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] 64-bit value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * NULL returned on error, pointer to the pushed value otherwise.
+ */
+uint8_t *
+ulp_blob_push_64(struct ulp_blob *blob,
+		 uint64_t *data,
+		 uint32_t datalen)
+{
+	uint8_t *val = (uint8_t *)data;
+	int rc;
+
+	int size = (datalen + 7) / 8;
+
+	if (!blob || !data ||
+	    datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return NULL;
+	}
+
+	rc = ulp_blob_push(blob, &val[8 - size], datalen);
+	if (!rc)
+		return NULL;
+
+	return &val[8 - size];
+}
+
+/*
+ * Add encap data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * Returns the number of bits pushed, zero on error.
+ */
+uint32_t
+ulp_blob_push_encap(struct ulp_blob *blob,
+		    uint8_t *data,
+		    uint32_t datalen)
+{
+	uint8_t		*val = (uint8_t *)data;
+	uint32_t	initial_size, write_size = datalen;
+	uint32_t	size = 0;
+
+	if (!blob || !data ||
+	    datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0;
+	}
+
+	initial_size = ULP_BYTE_2_BITS(sizeof(uint64_t)) -
+	    (blob->write_idx % ULP_BYTE_2_BITS(sizeof(uint64_t)));
+	while (write_size > 0) {
+		if (initial_size && write_size > initial_size) {
+			size = initial_size;
+			initial_size = 0;
+		} else if (initial_size && write_size <= initial_size) {
+			size = write_size;
+			initial_size = 0;
+		} else if (write_size > ULP_BYTE_2_BITS(sizeof(uint64_t))) {
+			size = ULP_BYTE_2_BITS(sizeof(uint64_t));
+		} else {
+			size = write_size;
+		}
+		if (!ulp_blob_push(blob, val, size)) {
+			BNXT_TF_DBG(ERR, "push field failed\n");
+			return 0;
+		}
+		val += ULP_BITS_2_BYTE(size);
+		write_size -= size;
+	}
+	return datalen;
+}
+
+/*
+ * Adds pad to an initialized blob at the current offset
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * datalen [in] The number of bits of pad to add
+ *
+ * returns the number of pad bits added, zero on failure
+ */
+uint32_t
+ulp_blob_pad_push(struct ulp_blob *blob,
+		  uint32_t datalen)
+{
+	if (!blob || datalen > (uint32_t)(blob->bitlen - blob->write_idx)) {
+		BNXT_TF_DBG(ERR, "Pad too large for blob\n");
+		return 0;
+	}
+
+	blob->write_idx += datalen;
+	return datalen;
+}
+
+/*
+ * Get the data portion of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * datalen [out] The number of bits that are filled.
+ *
+ * returns a byte array of the blob data.  Returns NULL on error.
+ */
+uint8_t *
+ulp_blob_data_get(struct ulp_blob *blob,
+		  uint16_t *datalen)
+{
+	/* validate the arguments */
+	if (!blob) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return NULL; /* failure */
+	}
+	*datalen = blob->write_idx;
+	return blob->data;
+}
+
+/*
+ * Set the encap swap start index of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * returns void.
+ */
+void
+ulp_blob_encap_swap_idx_set(struct ulp_blob *blob)
+{
+	/* validate the arguments */
+	if (!blob) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return; /* failure */
+	}
+	blob->encap_swap_idx = blob->write_idx;
+}
+
+/*
+ * Perform the encap buffer swap to 64 bit reversal.
+ *
+ * blob [in] The blob's data to be used for swap.
+ *
+ * returns void.
+ */
+void
+ulp_blob_perform_encap_swap(struct ulp_blob *blob)
+{
+	uint32_t		i, idx = 0, end_idx = 0;
+	uint8_t		temp_val_1, temp_val_2;
+
+	/* validate the arguments */
+	if (!blob) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return; /* failure */
+	}
+	idx = ULP_BITS_2_BYTE_NR(blob->encap_swap_idx + 1);
+	end_idx = ULP_BITS_2_BYTE(blob->write_idx);
+
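+	/*
+	 * Reverse each 8-byte chunk as four 2-byte units: words
+	 * 0,1,2,3 become 3,2,1,0 (i = 0 swaps word 0 with word 3,
+	 * i = 2 swaps word 1 with word 2).
+	 */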
+	while (idx <= end_idx) {
+		for (i = 0; i < 4; i = i + 2) {
+			temp_val_1 = blob->data[idx + i];
+			temp_val_2 = blob->data[idx + i + 1];
+			blob->data[idx + i] = blob->data[idx + 6 - i];
+			blob->data[idx + i + 1] = blob->data[idx + 7 - i];
+			blob->data[idx + 7 - i] = temp_val_2;
+			blob->data[idx + 6 - i] = temp_val_1;
+		}
+		idx += 8;
+	}
+}
+
+/*
+ * Read data from the operand
+ *
+ * operand [in] A pointer to a 16 Byte operand
+ *
+ * val [in/out] The variable to copy the operand to
+ *
+ * bytes [in] The number of bytes to read into val
+ *
+ * returns the number of bytes read, zero on error
+ */
+uint16_t
+ulp_operand_read(uint8_t *operand,
+		 uint8_t *val,
+		 uint16_t bytes)
+{
+	/* validate the arguments */
+	if (!operand || !val) {
+		BNXT_TF_DBG(ERR, "invalid argument\n");
+		return 0; /* failure */
+	}
+	memcpy(val, operand, bytes);
+	return bytes;
+}
+
+/*
+ * Copy the buffer into the encap format in 2 byte units.
+ * The MSB of the src is placed at the LSB of dst.
+ *
+ * dst [out] The destination buffer
+ * src [in] The source buffer
+ * size [in] The size of the buffer
+ */
+void
+ulp_encap_buffer_copy(uint8_t *dst,
+		      const uint8_t *src,
+		      uint16_t size)
+{
+	uint16_t	idx = 0;
+
+	/* copy 2 bytes at a time. Write MSB to LSB */
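+	/* e.g. src = 00 11 22 33 (size 4) yields dst = 22 33 00 11 */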
+	while ((idx + sizeof(uint16_t)) <= size) {
+		memcpy(&dst[idx], &src[size - idx - sizeof(uint16_t)],
+		       sizeof(uint16_t));
+		idx += sizeof(uint16_t);
+	}
+}
+
+/*
+ * Check whether the buffer is empty
+ *
+ * buf [in] The buffer
+ * size [in] The size of the buffer
+ *
+ * returns 1 if the buffer contains only zeroes, 0 otherwise
+ */
+int32_t ulp_buffer_is_empty(const uint8_t *buf, uint32_t size)
+{
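+	/* all-zero check: first byte is zero and every byte equals the next */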
+	return buf[0] == 0 && !memcmp(buf, buf + 1, size - 1);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_utils.h b/drivers/net/bnxt/tf_ulp/ulp_utils.h
new file mode 100644
index 0000000..db88546
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_utils.h
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_UTILS_H_
+#define _ULP_UTILS_H_
+
+#include "bnxt.h"
+#include "ulp_template_db.h"
+
+/*
+ * Macros for bitmap sets and gets
+ * These macros can be used if the val are power of 2.
+ */
+#define ULP_BITMAP_SET(bitmap, val)	((bitmap) |= (val))
+#define ULP_BITMAP_RESET(bitmap, val)	((bitmap) &= ~(val))
+#define ULP_BITMAP_ISSET(bitmap, val)	((bitmap) & (val))
+#define ULP_BITSET_CMP(b1, b2)  memcmp(&(b1)->bits, \
+				&(b2)->bits, sizeof((b1)->bits))
+/*
+ * Macros for bitmap sets and gets
+ * These macros can be used if the val are not power of 2 and
+ * are simple index values.
+ */
+#define ULP_INDEX_BITMAP_SIZE	(sizeof(uint64_t) * 8)
+#define ULP_INDEX_BITMAP_CSET(i)	(1UL << \
+			((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE)))
+
+#define ULP_INDEX_BITMAP_SET(b, i)	((b) |= \
+			(1UL << ((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE))))
+
+#define ULP_INDEX_BITMAP_RESET(b, i)	((b) &= \
+			(~(1UL << ((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE)))))
+
+#define ULP_INDEX_BITMAP_GET(b, i)		(((b) >> \
+			((ULP_INDEX_BITMAP_SIZE - 1) - \
+			((i) % ULP_INDEX_BITMAP_SIZE))) & 1)
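+
+/*
+ * Note: index 0 maps to the most significant bit of each uint64_t,
+ * e.g. ULP_INDEX_BITMAP_SET(b, 0) sets bit 63 (1UL << 63).
+ */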
+
+#define ULP_DEVICE_PARAMS_INDEX(tid, dev_id)	\
+	(((tid) << BNXT_ULP_LOG2_MAX_NUM_DEV) | (dev_id))
+
+/* Macro to convert bytes to bits */
+#define ULP_BYTE_2_BITS(byte_x)		((byte_x) * 8)
+/* Macro to convert bits to bytes */
+#define ULP_BITS_2_BYTE(bits_x)		(((bits_x) + 7) / 8)
+/* Macro to convert bits to bytes with no round off */
+#define ULP_BITS_2_BYTE_NR(bits_x)	((bits_x) / 8)
+
+/*
+ * Making the blob statically sized to 128 bytes for now.
+ * The blob must be initialized with ulp_blob_init prior to using.
+ */
+#define BNXT_ULP_FLMP_BLOB_SIZE	(128)
+#define BNXT_ULP_FLMP_BLOB_SIZE_IN_BITS	ULP_BYTE_2_BITS(BNXT_ULP_FLMP_BLOB_SIZE)
+struct ulp_blob {
+	enum bnxt_ulp_byte_order	byte_order;
+	uint16_t			write_idx;
+	uint16_t			bitlen;
+	uint8_t				data[BNXT_ULP_FLMP_BLOB_SIZE];
+	uint16_t			encap_swap_idx;
+};
+
+/*
+ * The data can likely be only 32 bits for now.  Just size check
+ * the data when being written.
+ */
+#define ULP_REGFILE_ENTRY_SIZE	(sizeof(uint32_t))
+struct ulp_regfile_entry {
+	uint64_t	data;
+	uint32_t	size;
+};
+
+struct ulp_regfile {
+	struct ulp_regfile_entry entry[BNXT_ULP_REGFILE_INDEX_LAST];
+};
+
+/*
+ * Initialize the regfile structure for writing
+ *
+ * regfile [in] Ptr to a regfile instance
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_regfile_init(struct ulp_regfile *regfile);
+
+/*
+ * Read a value from the regfile
+ *
+ * regfile [in] The regfile instance.  Must be initialized prior to being used
+ *
+ * field [in] The field to be read within the regfile.
+ *
+ * data [out] The variable the value is read into
+ *
+ * returns the size of the data read, zero on failure
+ */
+uint32_t
+ulp_regfile_read(struct ulp_regfile *regfile,
+		 enum bnxt_ulp_regfile_index field,
+		 uint64_t *data);
+
+/*
+ * Write a value to the regfile
+ *
+ * regfile [in] The regfile instance.  Must be initialized prior to being used
+ *
+ * field [in] The field to be written within the regfile.
+ *
+ * data [in] The value to be written into the regfile entry.  It is stored
+ * in the same byte order as it is passed in.
+ *
+ * returns the size of data on success, zero on error
+ */
+uint32_t
+ulp_regfile_write(struct ulp_regfile *regfile,
+		  enum bnxt_ulp_regfile_index field,
+		  uint64_t data);
+
+/*
+ * Initializes the blob structure for creating binary blob
+ *
+ * blob [in] The blob to be initialized
+ *
+ * bitlen [in] The bit length of the blob
+ *
+ * order [in] The byte order for the blob.  Currently only supporting
+ * big endian.  All fields are packed with this order.
+ *
+ * returns 0 on error or 1 on success
+ */
+uint32_t
+ulp_blob_init(struct ulp_blob *blob,
+	      uint16_t bitlen,
+	      enum bnxt_ulp_byte_order order);
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] A pointer to bytes to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * Returns the number of bits pushed, zero on error.
+ */
+uint32_t
+ulp_blob_push(struct ulp_blob *blob,
+	      uint8_t *data,
+	      uint32_t datalen);
+
+/*
+ * Add data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] 64-bit value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * NULL returned on error, pointer to the pushed data otherwise.
+ */
+uint8_t *
+ulp_blob_push_64(struct ulp_blob *blob,
+		 uint64_t *data,
+		 uint32_t datalen);
+
+/*
+ * Add encap data to the binary blob at the current offset.
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * data [in] value to be added to the blob.
+ *
+ * datalen [in] The number of bits to be added to the blob.
+ *
+ * The offset of the data is updated after each push of data.
+ * Returns the number of bits pushed, zero on error.
+ */
+uint32_t
+ulp_blob_push_encap(struct ulp_blob *blob,
+		    uint8_t *data,
+		    uint32_t datalen);
+
+/*
+ * Get the data portion of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * datalen [out] The number of bits that are filled.
+ *
+ * returns a byte array of the blob data.  Returns NULL on error.
+ */
+uint8_t *
+ulp_blob_data_get(struct ulp_blob *blob,
+		  uint16_t *datalen);
+
+/*
+ * Adds pad to an initialized blob at the current offset
+ *
+ * blob [in] The blob that data is added to.  The blob must
+ * be initialized prior to pushing data.
+ *
+ * datalen [in] The number of bits of pad to add
+ *
+ * returns the number of pad bits added, zero on failure
+ */
+uint32_t
+ulp_blob_pad_push(struct ulp_blob *blob,
+		  uint32_t datalen);
+
+/*
+ * Set the 64 bit swap start index of the binary blob.
+ *
+ * blob [in] The blob's data to be retrieved. The blob must be
+ * initialized prior to pushing data.
+ *
+ * returns void.
+ */
+void
+ulp_blob_encap_swap_idx_set(struct ulp_blob *blob);
+
+/*
+ * Perform the encap buffer swap to 64 bit reversal.
+ *
+ * blob [in] The blob's data to be used for swap.
+ *
+ * returns void.
+ */
+void
+ulp_blob_perform_encap_swap(struct ulp_blob *blob);
+
+/*
+ * Read data from the operand
+ *
+ * operand [in] A pointer to a 16 Byte operand
+ *
+ * val [in/out] The variable to copy the operand to
+ *
+ * bytes [in] The number of bytes to read into val
+ *
+ * returns the number of bytes read, zero on error
+ */
+uint16_t
+ulp_operand_read(uint8_t *operand,
+		 uint8_t *val,
+		 uint16_t bytes);
+
+/*
+ * Copy the buffer into the encap format in 2 byte units.
+ * The MSB of the src is placed at the LSB of dst.
+ *
+ * dst [out] The destination buffer
+ * src [in] The source buffer
+ * size [in] The size of the buffer
+ */
+void
+ulp_encap_buffer_copy(uint8_t *dst,
+		      const uint8_t *src,
+		      uint16_t size);
+
+/*
+ * Check whether the buffer is empty
+ *
+ * buf [in] The buffer
+ * size [in] The size of the buffer
+ */
+int32_t ulp_buffer_is_empty(const uint8_t *buf, uint32_t size);
+
+#endif /* _ULP_UTILS_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 19/34] net/bnxt: add support to process action tables
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (17 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 18/34] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 20/34] net/bnxt: add support to process key tables Venkat Duvvuru
                         ` (17 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch adds processing of the action template: it iterates
through the list of action info templates and processes each one.
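
Per action table, the processing boils down to the following
(a condensed restatement of the code added below, with error
handling omitted):

    struct ulp_blob blob;

    /* 1. size the blob for the result (max size when encap is used) */
    ulp_blob_init(&blob, tbl->result_bit_size, parms->order);

    /* 2. build the result record field by field, per each opcode */
    for (i = 0; i < num_flds + encap_flds; i++)
        ulp_mapper_result_field_process(parms, &flds[i], &blob);

    /* 3. allocate a TF table entry, program it with the blob, and
     * link it to the flow in the flow db for later cleanup
     */
    ulp_mapper_action_alloc_and_set(parms, &blob);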

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   1 +
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         | 136 ++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |  25 ++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 364 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |  39 +++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 245 +++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     | 104 ++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  48 +++-
 8 files changed, 959 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 4e0dea1..f464d9e 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -62,6 +62,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mark_mgr.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mapper.c
 
 #
 # Export include files
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 3dd39c1..6e73f25 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -7,7 +7,68 @@
 #include "bnxt.h"
 #include "bnxt_tf_common.h"
 #include "ulp_flow_db.h"
+#include "ulp_utils.h"
 #include "ulp_template_struct.h"
+#include "ulp_mapper.h"
+
+#define ULP_FLOW_DB_RES_DIR_BIT		31
+#define ULP_FLOW_DB_RES_DIR_MASK	0x80000000
+#define ULP_FLOW_DB_RES_FUNC_BITS	28
+#define ULP_FLOW_DB_RES_FUNC_MASK	0x70000000
+#define ULP_FLOW_DB_RES_NXT_MASK	0x0FFFFFFF
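+/*
+ * Layout of nxt_resource_idx, per the masks above:
+ *   bit  31    - direction
+ *   bits 28-30 - resource function
+ *   bits  0-27 - next resource index
+ */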
+
+/* Macro to copy the nxt_resource_idx */
+#define ULP_FLOW_DB_RES_NXT_SET(dst, src)	{(dst) |= ((src) &\
+					 ULP_FLOW_DB_RES_NXT_MASK); }
+#define ULP_FLOW_DB_RES_NXT_RESET(dst)	((dst) &= ~(ULP_FLOW_DB_RES_NXT_MASK))
+
+/*
+ * Helper function to check whether the active flow bit for a flow
+ * is set. No validation is done in this function.
+ *
+ * flow_tbl [in] Ptr to flow table
+ * idx [in] The index of the bit to be checked.
+ *
+ * returns 1 if set, 0 if not set.
+ */
+static int32_t
+ulp_flow_db_active_flow_is_set(struct bnxt_ulp_flow_tbl	*flow_tbl,
+			       uint32_t			idx)
+{
+	uint32_t		active_index;
+
+	active_index = idx / ULP_INDEX_BITMAP_SIZE;
+	return ULP_INDEX_BITMAP_GET(flow_tbl->active_flow_tbl[active_index],
+				    idx);
+}
+
+/*
+ * Helper function to copy the resource params to resource info.
+ * No validation is done in this function.
+ *
+ * resource_info [out] Ptr to resource information
+ * params [in] The input params from the caller
+ *
+ * returns none
+ */
+static void
+ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info   *resource_info,
+			       struct ulp_flow_db_res_params  *params)
+{
+	resource_info->nxt_resource_idx |= ((params->direction <<
+				      ULP_FLOW_DB_RES_DIR_BIT) &
+				     ULP_FLOW_DB_RES_DIR_MASK);
+	resource_info->nxt_resource_idx |= ((params->resource_func <<
+					     ULP_FLOW_DB_RES_FUNC_BITS) &
+					    ULP_FLOW_DB_RES_FUNC_MASK);
+
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+		resource_info->resource_hndl = (uint32_t)params->resource_hndl;
+		resource_info->resource_type = params->resource_type;
+
+	} else {
+		resource_info->resource_em_handle = params->resource_hndl;
+	}
+}
 
 /*
  * Helper function to allocate the flow table and initialize
@@ -185,3 +246,78 @@ int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 
 	return 0;
 }
+
+/*
+ * Add a resource to the flow database entry.
+ * The params->critical_resource has to be set to 0 to allocate a new resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in] The contents to be copied into resource
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+	struct ulp_fdb_resource_info	*resource, *fid_resource;
+	uint32_t			idx;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (tbl_idx >= BNXT_ULP_FLOW_TABLE_MAX) {
+		BNXT_TF_DBG(ERR, "Invalid table index\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* check for max flows */
+	if (fid >= flow_tbl->num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return -EINVAL;
+	}
+
+	/* check if the flow is active or not */
+	if (!ulp_flow_db_active_flow_is_set(flow_tbl, fid)) {
+		BNXT_TF_DBG(ERR, "flow does not exist\n");
+		return -EINVAL;
+	}
+
+	/* check for max resource */
+	if ((flow_tbl->num_flows + 1) >= flow_tbl->tail_index) {
+		BNXT_TF_DBG(ERR, "Flow db has reached max resources\n");
+		return -ENOMEM;
+	}
+	fid_resource = &flow_tbl->flow_resources[fid];
+
+	if (!params->critical_resource) {
+		/* Not the critical_resource so allocate a resource */
+		idx = flow_tbl->flow_tbl_stack[flow_tbl->tail_index];
+		resource = &flow_tbl->flow_resources[idx];
+		flow_tbl->tail_index--;
+
+		/* Update the chain list of resource */
+		ULP_FLOW_DB_RES_NXT_SET(resource->nxt_resource_idx,
+					fid_resource->nxt_resource_idx);
+		/* update the contents */
+		ulp_flow_db_res_params_to_info(resource, params);
+		ULP_FLOW_DB_RES_NXT_RESET(fid_resource->nxt_resource_idx);
+		ULP_FLOW_DB_RES_NXT_SET(fid_resource->nxt_resource_idx,
+					idx);
+	} else {
+		/* critical resource. Just update the fid resource */
+		ulp_flow_db_res_params_to_info(fid_resource, params);
+	}
+
+	/* all good, return success */
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index a2ee8fa..f6055a5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -53,6 +53,15 @@ struct bnxt_ulp_flow_db {
 	struct bnxt_ulp_flow_tbl	flow_tbl[BNXT_ULP_FLOW_TABLE_MAX];
 };
 
+/* flow db resource params to add resources */
+struct ulp_flow_db_res_params {
+	enum tf_dir			direction;
+	enum bnxt_ulp_resource_func	resource_func;
+	uint64_t			resource_hndl;
+	uint32_t			resource_type;
+	uint32_t			critical_resource;
+};
+
 /*
  * Initialize the flow database. Memory is allocated in this
  * call and assigned to the flow database.
@@ -74,4 +83,20 @@ int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt);
  */
 int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
 
+/*
+ * Add a resource to the flow database entry.
+ * The params->critical_resource has to be set to 0 to allocate a new resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in] The contents to be copied into resource
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params);
+
 #endif /* _ULP_FLOW_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
new file mode 100644
index 0000000..9cfc382
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -0,0 +1,364 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <rte_log.h>
+#include "bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_tf_common.h"
+#include "ulp_utils.h"
+#include "bnxt_ulp.h"
+#include "tfp.h"
+#include "tf_ext_flow_handle.h"
+#include "ulp_mark_mgr.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+
+int32_t
+ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms);
+
+/*
+ * Get the size of the action property for a given index.
+ *
+ * idx [in] The index for the action property
+ *
+ * returns the size of the action property.
+ */
+static uint32_t
+ulp_mapper_act_prop_size_get(uint32_t idx)
+{
+	if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST)
+		return 0;
+	return ulp_act_prop_map_table[idx];
+}
+
+/*
+ * Get the list of result fields that implement the flow action
+ *
+ * tbl [in] A single table instance to get the result fields from
+ *
+ * num_rslt_flds [out] The number of result fields in the returned array
+ *
+ * num_encap_flds [out] The number of encap fields in the returned array
+ *
+ * returns array of data fields, or NULL on error
+ */
+static struct bnxt_ulp_mapper_result_field_info *
+ulp_mapper_act_result_fields_get(struct bnxt_ulp_mapper_act_tbl_info *tbl,
+				 uint32_t *num_rslt_flds,
+				 uint32_t *num_encap_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_rslt_flds || !num_encap_flds)
+		return NULL;
+
+	idx		= tbl->result_start_idx;
+	*num_rslt_flds	= tbl->result_num_fields;
+	*num_encap_flds = tbl->encap_num_fields;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_act_result_field_list[idx];
+}
+
+static int32_t
+ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
+				struct bnxt_ulp_mapper_result_field_info *fld,
+				struct ulp_blob *blob)
+{
+	uint16_t idx, size_idx;
+	uint8_t	 *val = NULL;
+	uint64_t regval;
+	uint32_t val_size = 0, field_size = 0;
+
+	switch (fld->result_opcode) {
+	case BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT:
+		val = fld->result_operand;
+		if (!ulp_blob_push(blob, val, fld->field_bit_size)) {
+			BNXT_TF_DBG(ERR, "Failed to add field\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+
+		if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+			BNXT_TF_DBG(ERR, "act_prop[%d] oob\n", idx);
+			return -EINVAL;
+		}
+		val = &parms->act_prop->act_details[idx];
+		field_size = ulp_mapper_act_prop_size_get(idx);
+		if (fld->field_bit_size < ULP_BYTE_2_BITS(field_size)) {
+			field_size  = field_size -
+			    ((fld->field_bit_size + 7) / 8);
+			val += field_size;
+		}
+		if (!ulp_blob_push(blob, val, fld->field_bit_size)) {
+			BNXT_TF_DBG(ERR, "push field failed\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP_SZ:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+
+		if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+			BNXT_TF_DBG(ERR, "act_prop[%d] oob\n", idx);
+			return -EINVAL;
+		}
+		val = &parms->act_prop->act_details[idx];
+
+		/* get the size index next */
+		if (!ulp_operand_read(&fld->result_operand[sizeof(uint16_t)],
+				      (uint8_t *)&size_idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+		size_idx = tfp_be_to_cpu_16(size_idx);
+
+		if (size_idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+			BNXT_TF_DBG(ERR, "act_prop[%d] oob\n", size_idx);
+			return -EINVAL;
+		}
+		memcpy(&val_size, &parms->act_prop->act_details[size_idx],
+		       sizeof(uint32_t));
+		val_size = tfp_be_to_cpu_32(val_size);
+		val_size = ULP_BYTE_2_BITS(val_size);
+		ulp_blob_push_encap(blob, val, val_size);
+		break;
+	case BNXT_ULP_RESULT_OPC_SET_TO_REGFILE:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&idx, sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "operand read failed\n");
+			return -EINVAL;
+		}
+
+		idx = tfp_be_to_cpu_16(idx);
+		/* Uninitialized regfile entries return 0 */
+		if (!ulp_regfile_read(parms->regfile, idx, &regval)) {
+			BNXT_TF_DBG(ERR, "regfile[%d] read oob\n", idx);
+			return -EINVAL;
+		}
+
+		val = ulp_blob_push_64(blob, &regval, fld->field_bit_size);
+		if (!val) {
+			BNXT_TF_DBG(ERR, "push field failed\n");
+			return -EINVAL;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/* Function to alloc action record and set the table. */
+static int32_t
+ulp_mapper_action_alloc_and_set(struct bnxt_ulp_mapper_parms *parms,
+				struct ulp_blob *blob)
+{
+	struct ulp_flow_db_res_params		fid_parms;
+	struct tf_alloc_tbl_entry_parms		alloc_parms = { 0 };
+	struct tf_free_tbl_entry_parms		free_parms = { 0 };
+	struct bnxt_ulp_mapper_act_tbl_info	*atbls = parms->atbls;
+	int32_t					rc = 0;
+	int32_t trc;
+	uint64_t				idx;
+
+	/* Set the allocation parameters for the table */
+	alloc_parms.dir = atbls->direction;
+	alloc_parms.type = atbls->table_type;
+	alloc_parms.search_enable = atbls->srch_b4_alloc;
+	alloc_parms.result = ulp_blob_data_get(blob,
+					       &alloc_parms.result_sz_in_bytes);
+	if (!alloc_parms.result) {
+		BNXT_TF_DBG(ERR, "blob is not populated\n");
+		return -EINVAL;
+	}
+
+	rc = tf_alloc_tbl_entry(parms->tfp, &alloc_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "table type= [%d] dir = [%s] alloc failed\n",
+			    alloc_parms.type,
+			    (alloc_parms.dir == TF_DIR_RX) ? "RX" : "TX");
+		return rc;
+	}
+
+	/* Need to calculate the idx for the result record */
+	/*
+	 * TBD: Need to get the stride from tflib instead of having to
+	 * understand the construction of the pointer
+	 */
+	uint64_t tmpidx = alloc_parms.idx;
+
+	if (atbls->table_type == TF_TBL_TYPE_EXT)
+		tmpidx = (alloc_parms.idx * TF_ACTION_RECORD_SZ) >> 4;
+	else
+		tmpidx = alloc_parms.idx;
+
+	idx = tfp_cpu_to_be_64(tmpidx);
+
+	/* Store the allocated index for future use in the regfile */
+	rc = ulp_regfile_write(parms->regfile, atbls->regfile_wr_idx, idx);
+	if (!rc) {
+		BNXT_TF_DBG(ERR, "regfile[%d] write failed\n",
+			    atbls->regfile_wr_idx);
+		rc = -EINVAL;
+		goto error;
+	}
+
+	/*
+	 * Call the set_tbl_entry API if search is not enabled or the
+	 * searched entry is not found.
+	 */
+	if (!atbls->srch_b4_alloc || !alloc_parms.hit) {
+		struct tf_set_tbl_entry_parms set_parm = { 0 };
+		uint16_t	length;
+
+		set_parm.dir	= atbls->direction;
+		set_parm.type	= atbls->table_type;
+		set_parm.idx	= alloc_parms.idx;
+		set_parm.data	= ulp_blob_data_get(blob, &length);
+		set_parm.data_sz_in_bytes = length / 8;
+
+		if (set_parm.type == TF_TBL_TYPE_EXT)
+			bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
+							&set_parm.tbl_scope_id);
+		else
+			set_parm.tbl_scope_id = 0;
+
+		/* set the table entry */
+		rc = tf_set_tbl_entry(parms->tfp, &set_parm);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "table[%d][%s][%d] set failed\n",
+				    set_parm.type,
+				    (set_parm.dir == TF_DIR_RX) ? "RX" : "TX",
+				    set_parm.idx);
+			goto error;
+		}
+	}
+
+	/* Link the resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction		= atbls->direction;
+	fid_parms.resource_func		= atbls->resource_func;
+	fid_parms.resource_type		= atbls->table_type;
+	fid_parms.resource_hndl		= alloc_parms.idx;
+	fid_parms.critical_resource	= 0;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		rc = -EINVAL;
+		goto error;
+	}
+
+	return 0;
+error:
+
+	free_parms.dir	= alloc_parms.dir;
+	free_parms.type	= alloc_parms.type;
+	free_parms.idx	= alloc_parms.idx;
+
+	trc = tf_free_tbl_entry(parms->tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free table entry on failure\n");
+
+	return rc;
+}
+
+/*
+ * Function to process the action info. Iterate through the list
+ * of result fields and process each one.
+ */
+static int32_t
+ulp_mapper_action_info_process(struct bnxt_ulp_mapper_parms *parms,
+			       struct bnxt_ulp_mapper_act_tbl_info *tbl)
+{
+	struct ulp_blob					blob;
+	struct bnxt_ulp_mapper_result_field_info	*flds, *fld;
+	uint32_t					num_flds = 0;
+	uint32_t					encap_flds = 0;
+	uint32_t					i;
+	int32_t						rc;
+	uint16_t					bit_size;
+
+	if (!tbl || !parms->act_prop || !parms->act_bitmap || !parms->regfile)
+		return -EINVAL;
+
+	/* use the max size if encap is enabled */
+	if (tbl->encap_num_fields)
+		bit_size = BNXT_ULP_FLMP_BLOB_SIZE_IN_BITS;
+	else
+		bit_size = tbl->result_bit_size;
+	if (!ulp_blob_init(&blob, bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "action blob init failed\n");
+		return -EINVAL;
+	}
+
+	flds = ulp_mapper_act_result_fields_get(tbl, &num_flds, &encap_flds);
+	if (!flds || !num_flds) {
+		BNXT_TF_DBG(ERR, "Template undefined for action\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < (num_flds + encap_flds); i++) {
+		fld = &flds[i];
+		rc = ulp_mapper_result_field_process(parms,
+						     fld,
+						     &blob);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Action field failed\n");
+			return rc;
+		}
+		/* set the swap index if 64 bit swap is enabled */
+		if (parms->encap_byte_swap && encap_flds) {
+			if ((i + 1) == num_flds)
+				ulp_blob_encap_swap_idx_set(&blob);
+			/* if 64 bit swap is enabled perform the 64bit swap */
+			if ((i + 1) == (num_flds + encap_flds))
+				ulp_blob_perform_encap_swap(&blob);
+		}
+	}
+
+	rc = ulp_mapper_action_alloc_and_set(parms, &blob);
+	return rc;
+}
+
+/*
+ * Function to process the action template. Iterate through the list
+ * of action info templates and process each one.
+ */
+int32_t
+ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
+{
+	uint32_t	i;
+	int32_t		rc = 0;
+
+	if (!parms->atbls || !parms->num_atbls) {
+		BNXT_TF_DBG(ERR, "No action tables for template[%d][%d].\n",
+			    parms->dev_id, parms->act_tid);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < parms->num_atbls; i++) {
+		rc = ulp_mapper_action_info_process(parms, &parms->atbls[i]);
+		if (rc)
+			return rc;
+	}
+
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
new file mode 100644
index 0000000..adbcec2
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2019 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_MAPPER_H_
+#define _ULP_MAPPER_H_
+
+#include <tf_core.h>
+#include <rte_log.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_ulp.h"
+#include "ulp_utils.h"
+
+/* Internal Structure for passing the arguments around */
+struct bnxt_ulp_mapper_parms {
+	uint32_t				dev_id;
+	enum bnxt_ulp_byte_order		order;
+	uint32_t				act_tid;
+	struct bnxt_ulp_mapper_act_tbl_info	*atbls;
+	uint32_t				num_atbls;
+	uint32_t				class_tid;
+	struct bnxt_ulp_mapper_class_tbl_info	*ctbls;
+	uint32_t				num_ctbls;
+	struct ulp_rte_act_prop			*act_prop;
+	struct ulp_rte_act_bitmap		*act_bitmap;
+	struct ulp_rte_hdr_field		*hdr_field;
+	struct ulp_regfile			*regfile;
+	struct tf				*tfp;
+	struct bnxt_ulp_context			*ulp_ctx;
+	uint8_t					encap_byte_swap;
+	uint32_t				fid;
+	enum bnxt_ulp_flow_db_tables		tbl_idx;
+};
+
+#endif /* _ULP_MAPPER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 9670635..fc77800 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -11,6 +11,89 @@
 #include "ulp_template_db.h"
 #include "ulp_template_struct.h"
 
+uint32_t ulp_act_prop_map_table[] = {
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_SZ] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_SZ,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_TYPE] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_TYPE,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_NUM] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_NUM,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_L3_TYPE,
+	[BNXT_ULP_ACT_PROP_IDX_MPLS_POP_NUM] =
+		BNXT_ULP_ACT_PROP_SZ_MPLS_POP_NUM,
+	[BNXT_ULP_ACT_PROP_IDX_MPLS_PUSH_NUM] =
+		BNXT_ULP_ACT_PROP_SZ_MPLS_PUSH_NUM,
+	[BNXT_ULP_ACT_PROP_IDX_VNIC] =
+		BNXT_ULP_ACT_PROP_SZ_VNIC,
+	[BNXT_ULP_ACT_PROP_IDX_VPORT] =
+		BNXT_ULP_ACT_PROP_SZ_VPORT,
+	[BNXT_ULP_ACT_PROP_IDX_MARK] =
+		BNXT_ULP_ACT_PROP_SZ_MARK,
+	[BNXT_ULP_ACT_PROP_IDX_COUNT] =
+		BNXT_ULP_ACT_PROP_SZ_COUNT,
+	[BNXT_ULP_ACT_PROP_IDX_METER] =
+		BNXT_ULP_ACT_PROP_SZ_METER,
+	[BNXT_ULP_ACT_PROP_IDX_SET_MAC_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_MAC_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_MAC_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_MAC_DST,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_VLAN] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_VLAN,
+	[BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_PCP] =
+		BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_PCP,
+	[BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_VID] =
+		BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_VID,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV4_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV4_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV4_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV4_DST,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV6_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV6_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_IPV6_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_IPV6_DST,
+	[BNXT_ULP_ACT_PROP_IDX_SET_TP_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_SET_TP_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_SET_TP_DST] =
+		BNXT_ULP_ACT_PROP_SZ_SET_TP_DST,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_0] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_0,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_1] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_1,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_2] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_2,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_3] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_3,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_4] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_4,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_5] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_5,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_6] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_6,
+	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_7] =
+		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_7,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_SMAC,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_IP,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SRC,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_UDP,
+	[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN] =
+		BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN,
+	[BNXT_ULP_ACT_PROP_IDX_LAST] =
+		BNXT_ULP_ACT_PROP_SZ_LAST
+};
+
 struct bnxt_ulp_device_params ulp_device_params[] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
 		.global_fid_enable       = BNXT_ULP_SYM_YES,
@@ -25,3 +108,165 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 		.num_resources_per_flow  = 8
 	}
 };
+
+struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
+	{
+	.field_bit_size = 14,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {BNXT_ULP_SYM_DECAP_FUNC_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP,
+	.result_operand = {(BNXT_ULP_ACT_PROP_IDX_VNIC >> 8) & 0xff,
+		BNXT_ULP_ACT_PROP_IDX_VNIC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 1eed828..e52cc3f 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -39,9 +39,113 @@ enum bnxt_ulp_regfile_index {
 	BNXT_ULP_REGFILE_INDEX_LAST
 };
 
+enum bnxt_ulp_resource_func {
+	BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE = 0,
+	BNXT_ULP_RESOURCE_FUNC_EM_TABLE = 1,
+	BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE = 2,
+	BNXT_ULP_RESOURCE_FUNC_IDENTIFIER = 3,
+	BNXT_ULP_RESOURCE_FUNC_HW_FID = 4,
+	BNXT_ULP_RESOURCE_FUNC_LAST = 5
+};
+
+enum bnxt_ulp_result_opc {
+	BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT = 0,
+	BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP = 1,
+	BNXT_ULP_RESULT_OPC_SET_TO_ACT_PROP_SZ = 2,
+	BNXT_ULP_RESULT_OPC_SET_TO_REGFILE = 3,
+	BNXT_ULP_RESULT_OPC_LAST = 4
+};
+
 enum bnxt_ulp_sym {
+	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
 	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
 	BNXT_ULP_SYM_YES = 1
 };
 
+enum bnxt_ulp_act_prop_sz {
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN_SZ = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SZ = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_SZ = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_TYPE = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG_NUM = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_L3_TYPE = 4,
+	BNXT_ULP_ACT_PROP_SZ_MPLS_POP_NUM = 4,
+	BNXT_ULP_ACT_PROP_SZ_MPLS_PUSH_NUM = 4,
+	BNXT_ULP_ACT_PROP_SZ_VNIC = 4,
+	BNXT_ULP_ACT_PROP_SZ_VPORT = 4,
+	BNXT_ULP_ACT_PROP_SZ_MARK = 4,
+	BNXT_ULP_ACT_PROP_SZ_COUNT = 4,
+	BNXT_ULP_ACT_PROP_SZ_METER = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_MAC_SRC = 8,
+	BNXT_ULP_ACT_PROP_SZ_SET_MAC_DST = 8,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_VLAN = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_PCP = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_VID = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV4_SRC = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV4_DST = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV6_SRC = 16,
+	BNXT_ULP_ACT_PROP_SZ_SET_IPV6_DST = 16,
+	BNXT_ULP_ACT_PROP_SZ_SET_TP_SRC = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_TP_DST = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_0 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_1 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_2 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_3 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_4 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_5 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_6 = 4,
+	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_7 = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC = 6,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_SMAC = 6,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_VTAG = 8,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP = 32,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SRC = 16,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_UDP = 4,
+	BNXT_ULP_ACT_PROP_SZ_ENCAP_TUN = 32,
+	BNXT_ULP_ACT_PROP_SZ_LAST = 4
+};
+
+enum bnxt_ulp_act_prop_idx {
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ = 0,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ = 4,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_SZ = 8,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_TYPE = 12,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_NUM = 16,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE = 20,
+	BNXT_ULP_ACT_PROP_IDX_MPLS_POP_NUM = 24,
+	BNXT_ULP_ACT_PROP_IDX_MPLS_PUSH_NUM = 28,
+	BNXT_ULP_ACT_PROP_IDX_VNIC = 32,
+	BNXT_ULP_ACT_PROP_IDX_VPORT = 36,
+	BNXT_ULP_ACT_PROP_IDX_MARK = 40,
+	BNXT_ULP_ACT_PROP_IDX_COUNT = 44,
+	BNXT_ULP_ACT_PROP_IDX_METER = 48,
+	BNXT_ULP_ACT_PROP_IDX_SET_MAC_SRC = 52,
+	BNXT_ULP_ACT_PROP_IDX_SET_MAC_DST = 60,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_VLAN = 68,
+	BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_PCP = 72,
+	BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_VID = 76,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV4_SRC = 80,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV4_DST = 84,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_SRC = 88,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_DST = 104,
+	BNXT_ULP_ACT_PROP_IDX_SET_TP_SRC = 120,
+	BNXT_ULP_ACT_PROP_IDX_SET_TP_DST = 124,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_0 = 128,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_1 = 132,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_2 = 136,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_3 = 140,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_4 = 144,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_5 = 148,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_6 = 152,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_7 = 156,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC = 160,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC = 166,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG = 172,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP = 180,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC = 212,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP = 228,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN = 232,
+	BNXT_ULP_ACT_PROP_IDX_LAST = 264
+};
+
 #endif /* _ULP_TEMPLATE_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 4b9d0b2..2b0a3d7 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,7 +17,15 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
-/* Device specific parameters. */
+/*
+ * Structure to hold the action property details.
+ * It is an array of BNXT_ULP_ACT_PROP_IDX_LAST bytes.
+ */
+struct ulp_rte_act_prop {
+	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
+};
+
+/* Device specific parameters */
 struct bnxt_ulp_device_params {
 	uint8_t				description[16];
 	uint32_t			global_fid_enable;
@@ -31,10 +39,44 @@ struct bnxt_ulp_device_params {
 	uint32_t			num_resources_per_flow;
 };
 
+struct bnxt_ulp_mapper_act_tbl_info {
+	enum bnxt_ulp_resource_func	resource_func;
+	enum tf_tbl_type table_type;
+	uint8_t		direction;
+	uint8_t		srch_b4_alloc;
+	uint32_t	result_start_idx;
+	uint16_t	result_bit_size;
+	uint16_t	encap_num_fields;
+	uint16_t	result_num_fields;
+
+	enum bnxt_ulp_regfile_index	regfile_wr_idx;
+};
+
+struct bnxt_ulp_mapper_result_field_info {
+	uint8_t				name[64];
+	enum bnxt_ulp_result_opc	result_opcode;
+	uint16_t			field_bit_size;
+	uint8_t				result_operand[16];
+};
+
 /*
- * The ulp_device_params is indexed by the dev_id.
- * This table maintains the device specific parameters.
+ * The ulp_device_params is indexed by the dev_id
+ * This table maintains the device specific parameters
  */
 extern struct bnxt_ulp_device_params ulp_device_params[];
 
+/*
+ * The ulp_act_result_field_list provides the instructions for creating an
+ * action record.  It uses the same structure as the result list, but is only
+ * used for actions.
+ */
+extern
+struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[];
+
+/*
+ * The ulp_act_prop_map_table maps each action property index to the size of
+ * that property.
+ */
+extern uint32_t ulp_act_prop_map_table[];
+
 #endif /* _ULP_TEMPLATE_STRUCT_H_ */
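
As a reference for how these index/size pairs are meant to be consumed: the
BNXT_ULP_ACT_PROP_IDX_* enums are byte offsets into act_details[] and the
matching BNXT_ULP_ACT_PROP_SZ_* enums are the field widths. A hedged sketch
(the helper name is illustrative, not part of the patch; assumes <string.h>):

	/* Illustrative only: store a big-endian mark value in the blob. */
	static inline void
	example_act_prop_set_mark(struct ulp_rte_act_prop *prop,
				  uint32_t mark_be)
	{
		memcpy(&prop->act_details[BNXT_ULP_ACT_PROP_IDX_MARK],
		       &mark_be, BNXT_ULP_ACT_PROP_SZ_MARK);
	}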
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 20/34] net/bnxt: add support to process key tables
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (18 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 19/34] net/bnxt: add support to process action tables Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 21/34] net/bnxt: add support to free key and action tables Venkat Duvvuru
                         ` (16 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch adds the mapper support to build the classifier table entries
(TCAM, exact match and index tables) for a flow and link the allocated
resources to the flow in the flow database.
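
A minimal sketch of how the new entry point is meant to be driven once the
parser has populated the mapper parms (ulp_mapper_action_tbls_process comes
from an earlier patch in this series):

	rc = ulp_mapper_action_tbls_process(parms);
	if (!rc)
		rc = ulp_mapper_class_tbls_process(parms);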

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c            | 784 +++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h            |   2 +
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c          |  80 ++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h          |  18 +
 drivers/net/bnxt/tf_ulp/ulp_template_db.c       | 829 ++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h       | 142 +++-
 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h | 130 ++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h   |  93 ++-
 8 files changed, 2070 insertions(+), 8 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 9cfc382..f378f8e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -19,6 +19,9 @@
 int32_t
 ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms);
 
+int32_t
+ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms);
+
 /*
  * Get the size of the action property for a given index.
  *
@@ -37,10 +40,65 @@ ulp_mapper_act_prop_size_get(uint32_t idx)
 /*
- * Get the list of result fields that implement the flow action
+ * Get the list of key fields that implement the flow
  *
+ * tbl [in] A single table instance to get the key fields from
+ *
+ * num_flds [out] The number of key fields in the returned array
+ *
+ * Returns array of key fields, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_class_key_field_info *
+ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			  uint32_t *num_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_flds)
+		return NULL;
+
+	idx		= tbl->key_start_idx;
+	*num_flds	= tbl->key_num_fields;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_class_key_field_list[idx];
+}
+
+/*
+ * Get the list of data fields that implement the flow.
+ *
+ * tbl [in] A single table instance to get the data fields from
+ *
+ * num_flds [out] The number of data fields in the returned array.
+ *
+ * Returns array of data fields, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_result_field_info *
+ulp_mapper_result_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			     uint32_t *num_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_flds)
+		return NULL;
+
+	idx		= tbl->result_start_idx;
+	*num_flds	= tbl->result_num_fields;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_class_result_field_list[idx];
+}
+
+/*
+ * Get the list of result fields that implement the flow action.
+ *
- * tbl [in] A single table instance to get the results fields
- * from num_flds [out] The number of data fields in the returned
- * array
- * returns array of data fields, or NULL on error
+ * tbl [in] A single table instance to get the result fields from
+ *
+ * num_flds [out] The number of data fields in the returned array.
+ *
+ * Returns array of data fields, or NULL on error.
  */
 static struct bnxt_ulp_mapper_result_field_info *
 ulp_mapper_act_result_fields_get(struct bnxt_ulp_mapper_act_tbl_info *tbl,
@@ -60,6 +118,106 @@ ulp_mapper_act_result_fields_get(struct bnxt_ulp_mapper_act_tbl_info *tbl,
 	return &ulp_act_result_field_list[idx];
 }
 
+/*
+ * Get the list of ident fields that implement the flow
+ *
+ * tbl [in] A single table instance to get the ident fields from
+ *
+ * num_flds [out] The number of ident fields in the returned array
+ *
+ * Returns array of ident fields, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_ident_info *
+ulp_mapper_ident_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			    uint32_t *num_flds)
+{
+	uint32_t idx;
+
+	if (!tbl || !num_flds)
+		return NULL;
+
+	idx = tbl->ident_start_idx;
+	*num_flds = tbl->ident_nums;
+
+	/* NOTE: Need template to provide range checking define */
+	return &ulp_ident_list[idx];
+}
+
+static int32_t
+ulp_mapper_ident_process(struct bnxt_ulp_mapper_parms *parms,
+			 struct bnxt_ulp_mapper_class_tbl_info *tbl,
+			 struct bnxt_ulp_mapper_ident_info *ident)
+{
+	struct ulp_flow_db_res_params	fid_parms;
+	uint64_t id = 0;
+	int32_t idx;
+	struct tf_alloc_identifier_parms iparms = { 0 };
+	struct tf_free_identifier_parms free_parms = { 0 };
+	struct tf *tfp;
+	int rc;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get tf pointer\n");
+		return -EINVAL;
+	}
+
+	idx = ident->regfile_wr_idx;
+
+	iparms.ident_type = ident->ident_type;
+	iparms.dir = tbl->direction;
+
+	rc = tf_alloc_identifier(tfp, &iparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Alloc ident %s:%d failed.\n",
+			    (iparms.dir == TF_DIR_RX) ? "RX" : "TX",
+			    iparms.ident_type);
+		return rc;
+	}
+
+	id = (uint64_t)tfp_cpu_to_be_64(iparms.id);
+	if (!ulp_regfile_write(parms->regfile, idx, id)) {
+		BNXT_TF_DBG(ERR, "Regfile[%d] write failed.\n", idx);
+		rc = -EINVAL;
+		/* Need to free the identifier, so goto error */
+		goto error;
+	}
+
+	/* Link the resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction		= tbl->direction;
+	fid_parms.resource_func	= ident->resource_func;
+	fid_parms.resource_type	= ident->ident_type;
+	fid_parms.resource_hndl	= iparms.id;
+	fid_parms.critical_resource	= 0;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		/* Need to free the identifier, so goto error */
+		goto error;
+	}
+
+	return 0;
+
+error:
+	/* Need to free the identifier */
+	free_parms.dir		= tbl->direction;
+	free_parms.ident_type	= ident->ident_type;
+	free_parms.id		= iparms.id;
+
+	(void)tf_free_identifier(tfp, &free_parms);
+
+	BNXT_TF_DBG(ERR, "Ident process failed for %s:%s\n",
+		    ident->name,
+		    (tbl->direction == TF_DIR_RX) ? "RX" : "TX");
+	return rc;
+}
+
 static int32_t
 ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 				struct bnxt_ulp_mapper_result_field_info *fld,
@@ -163,6 +321,100 @@ ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 
 /* Function to alloc action record and set the table. */
 static int32_t
+ulp_mapper_keymask_field_process(struct bnxt_ulp_mapper_parms *parms,
+				 struct bnxt_ulp_mapper_class_key_field_info *f,
+				 struct ulp_blob *blob,
+				 uint8_t is_key)
+{
+	uint64_t regval;
+	uint16_t idx, bitlen;
+	uint32_t opcode;
+	uint8_t *operand;
+	struct ulp_regfile *regfile = parms->regfile;
+	uint8_t *val = NULL;
+	struct bnxt_ulp_mapper_class_key_field_info *fld = f;
+	uint32_t field_size;
+
+	if (is_key) {
+		operand = fld->spec_operand;
+		opcode	= fld->spec_opcode;
+	} else {
+		operand = fld->mask_operand;
+		opcode	= fld->mask_opcode;
+	}
+
+	bitlen = fld->field_bit_size;
+
+	switch (opcode) {
+	case BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT:
+		val = operand;
+		if (!ulp_blob_push(blob, val, bitlen)) {
+			BNXT_TF_DBG(ERR, "push to key blob failed\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_SPEC_OPC_ADD_PAD:
+		if (!ulp_blob_pad_push(blob, bitlen)) {
+			BNXT_TF_DBG(ERR, "Pad too large for blob\n");
+			return -EINVAL;
+		}
+
+		break;
+	case BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD:
+		if (!ulp_operand_read(operand, (uint8_t *)&idx,
+				      sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "key operand read failed.\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+		if (is_key)
+			val = parms->hdr_field[idx].spec;
+		else
+			val = parms->hdr_field[idx].mask;
+
+		/*
+		 * Need to account for how much data was pushed to the header
+		 * field vs how much is to be inserted in the key/mask.
+		 */
+		field_size = parms->hdr_field[idx].size;
+		if (bitlen < ULP_BYTE_2_BITS(field_size)) {
+			field_size  = field_size - ((bitlen + 7) / 8);
+			val += field_size;
+		}
+
+		if (!ulp_blob_push(blob, val, bitlen)) {
+			BNXT_TF_DBG(ERR, "push to key blob failed\n");
+			return -EINVAL;
+		}
+		break;
+	case BNXT_ULP_SPEC_OPC_SET_TO_REGFILE:
+		if (!ulp_operand_read(operand, (uint8_t *)&idx,
+				      sizeof(uint16_t))) {
+			BNXT_TF_DBG(ERR, "key operand read failed.\n");
+			return -EINVAL;
+		}
+		idx = tfp_be_to_cpu_16(idx);
+
+		if (!ulp_regfile_read(regfile, idx, &regval)) {
+			BNXT_TF_DBG(ERR, "regfile[%d] read failed.\n",
+				    idx);
+			return -EINVAL;
+		}
+
+		val = ulp_blob_push_64(blob, &regval, bitlen);
+		if (!val) {
+			BNXT_TF_DBG(ERR, "push to key blob failed\n");
+			return -EINVAL;
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+/* Function to alloc action record and set the table. */
+static int32_t
 ulp_mapper_action_alloc_and_set(struct bnxt_ulp_mapper_parms *parms,
 				struct ulp_blob *blob)
 {
@@ -338,6 +590,489 @@ ulp_mapper_action_info_process(struct bnxt_ulp_mapper_parms *parms,
 	return rc;
 }
 
+static int32_t
+ulp_mapper_tcam_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			    struct bnxt_ulp_mapper_class_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_class_key_field_info	*kflds;
+	struct ulp_blob key, mask, data;
+	uint32_t i, num_kflds;
+	struct tf *tfp;
+	int32_t rc, trc;
+	struct tf_alloc_tcam_entry_parms aparms		= { 0 };
+	struct tf_set_tcam_entry_parms sparms		= { 0 };
+	struct ulp_flow_db_res_params	fid_parms	= { 0 };
+	struct tf_free_tcam_entry_parms free_parms	= { 0 };
+	uint32_t hit = 0;
+	uint16_t tmplen = 0;
+
+	tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Failed to get truflow pointer\n");
+		return -EINVAL;
+	}
+
+	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
+	if (!kflds || !num_kflds) {
+		BNXT_TF_DBG(ERR, "Failed to get key fields\n");
+		return -EINVAL;
+	}
+
+	if (!ulp_blob_init(&key, tbl->key_bit_size, parms->order) ||
+	    !ulp_blob_init(&mask, tbl->key_bit_size, parms->order) ||
+	    !ulp_blob_init(&data, tbl->result_bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "blob inits failed.\n");
+		return -EINVAL;
+	}
+
+	/* create the key/mask */
+	/*
+	 * NOTE: The WC table will require some kind of flag to handle the
+	 * mode bits within the key/mask
+	 */
+	for (i = 0; i < num_kflds; i++) {
+		/* Setup the key */
+		rc = ulp_mapper_keymask_field_process(parms, &kflds[i],
+						      &key, 1);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Key field set failed.\n");
+			return rc;
+		}
+
+		/* Setup the mask */
+		rc = ulp_mapper_keymask_field_process(parms, &kflds[i],
+						      &mask, 0);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Mask field set failed.\n");
+			return rc;
+		}
+	}
+
+	aparms.dir		= tbl->direction;
+	aparms.tcam_tbl_type	= tbl->table_type;
+	aparms.search_enable	= tbl->srch_b4_alloc;
+	aparms.key_sz_in_bits	= tbl->key_bit_size;
+	aparms.key		= ulp_blob_data_get(&key, &tmplen);
+	if (tbl->key_bit_size != tmplen) {
+		BNXT_TF_DBG(ERR, "Key len (%d) != Expected (%d)\n",
+			    tmplen, tbl->key_bit_size);
+		return -EINVAL;
+	}
+
+	aparms.mask		= ulp_blob_data_get(&mask, &tmplen);
+	if (tbl->key_bit_size != tmplen) {
+		BNXT_TF_DBG(ERR, "Mask len (%d) != Expected (%d)\n",
+			    tmplen, tbl->key_bit_size);
+		return -EINVAL;
+	}
+
+	aparms.priority		= tbl->priority;
+
+	/*
+	 * All failures after this succeeds require the entry to be freed.
+	 * Cannot return directly on failure; need to goto error instead.
+	 */
+	rc = tf_alloc_tcam_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "tcam alloc failed rc=%d.\n", rc);
+		return rc;
+	}
+
+	hit = aparms.hit;
+
+	/* Build the result */
+	if (!tbl->srch_b4_alloc || !hit) {
+		struct bnxt_ulp_mapper_result_field_info *dflds;
+		struct bnxt_ulp_mapper_ident_info *idents;
+		uint32_t num_dflds, num_idents;
+
+		/* Alloc identifiers */
+		idents = ulp_mapper_ident_fields_get(tbl, &num_idents);
+
+		for (i = 0; i < num_idents; i++) {
+			rc = ulp_mapper_ident_process(parms, tbl, &idents[i]);
+
+			/* Already logged the error, just return */
+			if (rc)
+				goto error;
+		}
+
+		/* Create the result data blob */
+		dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
+		if (!dflds || !num_dflds) {
+			BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
+			rc = -EINVAL;
+			goto error;
+		}
+
+		for (i = 0; i < num_dflds; i++) {
+			rc = ulp_mapper_result_field_process(parms,
+							     &dflds[i],
+							     &data);
+			if (rc) {
+				BNXT_TF_DBG(ERR, "Failed to set data fields\n");
+				goto error;
+			}
+		}
+
+		sparms.dir		= aparms.dir;
+		sparms.tcam_tbl_type	= aparms.tcam_tbl_type;
+		sparms.idx		= aparms.idx;
+		/* Already verified the key/mask lengths */
+		sparms.key		= ulp_blob_data_get(&key, &tmplen);
+		sparms.mask		= ulp_blob_data_get(&mask, &tmplen);
+		sparms.key_sz_in_bits	= tbl->key_bit_size;
+		sparms.result		= ulp_blob_data_get(&data, &tmplen);
+
+		if (tbl->result_bit_size != tmplen) {
+			BNXT_TF_DBG(ERR, "Result len (%d) != Expected (%d)\n",
+				    tmplen, tbl->result_bit_size);
+			rc = -EINVAL;
+			goto error;
+		}
+		sparms.result_sz_in_bits = tbl->result_bit_size;
+
+		rc = tf_set_tcam_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "tcam[%d][%s][%d] write failed.\n",
+				    sparms.tcam_tbl_type,
+				    (sparms.dir == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx);
+			goto error;
+		}
+	} else {
+		BNXT_TF_DBG(ERR, "Not supporting search before alloc now\n");
+		rc = -EINVAL;
+		goto error;
+	}
+
+	/* Link the resource to the flow in the flow db */
+	fid_parms.direction = tbl->direction;
+	fid_parms.resource_func	= tbl->resource_func;
+	fid_parms.resource_type	= tbl->table_type;
+	fid_parms.critical_resource = tbl->critical_resource;
+	fid_parms.resource_hndl	= aparms.idx;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		/* Need to free the identifier, so goto error */
+		goto error;
+	}
+
+	return 0;
+error:
+	free_parms.dir			= tbl->direction;
+	free_parms.tcam_tbl_type	= tbl->table_type;
+	free_parms.idx			= aparms.idx;
+	trc = tf_free_tcam_entry(tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free tcam[%d][%d][%d] on failure\n",
+			    tbl->table_type, tbl->direction, aparms.idx);
+
+	return rc;
+}
+
+static int32_t
+ulp_mapper_em_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			  struct bnxt_ulp_mapper_class_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_class_key_field_info	*kflds;
+	struct bnxt_ulp_mapper_result_field_info *dflds;
+	struct ulp_blob key, data;
+	uint32_t i, num_kflds, num_dflds;
+	uint16_t tmplen;
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+	struct ulp_rte_act_prop	 *a_prop = parms->act_prop;
+	struct ulp_flow_db_res_params	fid_parms = { 0 };
+	struct tf_insert_em_entry_parms iparms = { 0 };
+	struct tf_delete_em_entry_parms free_parms = { 0 };
+	int32_t	trc;
+	int32_t rc = 0;
+
+	kflds = ulp_mapper_key_fields_get(tbl, &num_kflds);
+	if (!kflds || !num_kflds) {
+		BNXT_TF_DBG(ERR, "Failed to get key fields\n");
+		return -EINVAL;
+	}
+
+	/* Initialize the key/result blobs */
+	if (!ulp_blob_init(&key, tbl->blob_key_bit_size, parms->order) ||
+	    !ulp_blob_init(&data, tbl->result_bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "blob inits failed.\n");
+		return -EINVAL;
+	}
+
+	/* create the key */
+	for (i = 0; i < num_kflds; i++) {
+		/* Setup the key */
+		rc = ulp_mapper_keymask_field_process(parms, &kflds[i],
+						      &key, 1);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Key field set failed.\n");
+			return rc;
+		}
+	}
+
+	/*
+	 * TBD: Identifiers would normally be processed here to support
+	 * recycle or loopback.  Recycle is not supported for now.
+	 */
+
+	/* Create the result data blob */
+	dflds = ulp_mapper_result_fields_get(tbl, &num_dflds);
+	if (!dflds || !num_dflds) {
+		BNXT_TF_DBG(ERR, "Failed to get data fields.\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_dflds; i++) {
+		struct bnxt_ulp_mapper_result_field_info *fld;
+
+		fld = &dflds[i];
+
+		rc = ulp_mapper_result_field_process(parms,
+						     fld,
+						     &data);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Failed to set data fields.\n");
+			return rc;
+		}
+	}
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(parms->ulp_ctx,
+					     &iparms.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope rc=%d\n", rc);
+		return rc;
+	}
+
+	/*
+	 * NOTE: the actual blob size will differ from the size in the tbl
+	 * entry due to the padding.
+	 */
+	iparms.dup_check		= 0;
+	iparms.dir			= tbl->direction;
+	iparms.mem			= tbl->mem;
+	iparms.key			= ulp_blob_data_get(&key, &tmplen);
+	iparms.key_sz_in_bits		= tbl->key_bit_size;
+	iparms.em_record		= ulp_blob_data_get(&data, &tmplen);
+	iparms.em_record_sz_in_bits	= tbl->result_bit_size;
+
+	rc = tf_insert_em_entry(tfp, &iparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to insert em entry rc=%d.\n", rc);
+		return rc;
+	}
+
+	if (tbl->mark_enable &&
+	    ULP_BITMAP_ISSET(parms->act_bitmap->bits,
+			     BNXT_ULP_ACTION_BIT_MARK)) {
+		uint32_t val, mark, gfid, flag;
+		/* TBD: Need to determine if GFID is enabled globally */
+		if (sizeof(val) != BNXT_ULP_ACT_PROP_SZ_MARK) {
+			BNXT_TF_DBG(ERR, "Mark size (%d) != expected (%zu)\n",
+				    BNXT_ULP_ACT_PROP_SZ_MARK, sizeof(val));
+			rc = -EINVAL;
+			goto error;
+		}
+
+		memcpy(&val,
+		       &a_prop->act_details[BNXT_ULP_ACT_PROP_IDX_MARK],
+		       sizeof(val));
+
+		mark = tfp_be_to_cpu_32(val);
+
+		TF_GET_GFID_FROM_FLOW_ID(iparms.flow_id, gfid);
+		TF_GET_FLAG_FROM_FLOW_ID(iparms.flow_id, flag);
+
+		rc = ulp_mark_db_mark_add(parms->ulp_ctx,
+					  (flag == TF_GFID_TABLE_EXTERNAL),
+					  gfid,
+					  mark);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Failed to add mark to flow\n");
+			goto error;
+		}
+
+		/*
+		 * Link the mark resource to the flow in the flow db
+		 * The mark is never the critical resource, so it is 0.
+		 */
+		memset(&fid_parms, 0, sizeof(fid_parms));
+		fid_parms.direction	= tbl->direction;
+		fid_parms.resource_func	= BNXT_ULP_RESOURCE_FUNC_HW_FID;
+		fid_parms.resource_type	= tbl->table_type;
+		fid_parms.resource_hndl	= iparms.flow_id;
+		fid_parms.critical_resource = 0;
+
+		rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+					      parms->tbl_idx,
+					      parms->fid,
+					      &fid_parms);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Fail to link res to flow rc = %d\n",
+				    rc);
+			/* Need to free the identifier, so goto error */
+			goto error;
+		}
+	}
+
+	/* Link the EM resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction		= tbl->direction;
+	fid_parms.resource_func		= tbl->resource_func;
+	fid_parms.resource_type		= tbl->table_type;
+	fid_parms.critical_resource	= tbl->critical_resource;
+	fid_parms.resource_hndl		= iparms.flow_handle;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Fail to link res to flow rc = %d\n",
+			    rc);
+		/* Need to free the identifier, so goto error */
+		goto error;
+	}
+
+	return 0;
+error:
+	free_parms.dir		= iparms.dir;
+	free_parms.mem		= iparms.mem;
+	free_parms.tbl_scope_id	= iparms.tbl_scope_id;
+	free_parms.flow_handle	= iparms.flow_handle;
+
+	trc = tf_delete_em_entry(tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to delete EM entry on failed add\n");
+
+	return rc;
+}
+
+static int32_t
+ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
+			     struct bnxt_ulp_mapper_class_tbl_info *tbl)
+{
+	struct bnxt_ulp_mapper_result_field_info *flds;
+	struct ulp_flow_db_res_params	fid_parms;
+	struct ulp_blob	data;
+	uint64_t idx;
+	uint16_t tmplen;
+	uint32_t i, num_flds;
+	int32_t rc = 0, trc = 0;
+	struct tf_alloc_tbl_entry_parms	aparms = { 0 };
+	struct tf_set_tbl_entry_parms	sparms = { 0 };
+	struct tf_free_tbl_entry_parms	free_parms = { 0 };
+
+	struct tf *tfp = bnxt_ulp_cntxt_tfp_get(parms->ulp_ctx);
+
+	if (!ulp_blob_init(&data, tbl->result_bit_size, parms->order)) {
+		BNXT_TF_DBG(ERR, "Failed initial index table blob\n");
+		return -EINVAL;
+	}
+
+	flds = ulp_mapper_result_fields_get(tbl, &num_flds);
+	if (!flds || !num_flds) {
+		BNXT_TF_DBG(ERR, "Template undefined for the table\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < num_flds; i++) {
+		rc = ulp_mapper_result_field_process(parms,
+						     &flds[i],
+						     &data);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "data field failed\n");
+			return rc;
+		}
+	}
+
+	aparms.dir		= tbl->direction;
+	aparms.type		= tbl->table_type;
+	aparms.search_enable	= tbl->srch_b4_alloc;
+	aparms.result		= ulp_blob_data_get(&data, &tmplen);
+	aparms.result_sz_in_bytes = ULP_SZ_BITS2BYTES(tbl->result_bit_size);
+
+	/* All failures after the alloc succeeds require a free */
+	rc = tf_alloc_tbl_entry(tfp, &aparms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Alloc table[%d][%s] failed rc=%d\n",
+			    tbl->table_type,
+			    (tbl->direction == TF_DIR_RX) ? "RX" : "TX",
+			    rc);
+		return rc;
+	}
+
+	/* Always storing values in Regfile in BE */
+	idx = tfp_cpu_to_be_64(aparms.idx);
+	rc = ulp_regfile_write(parms->regfile, tbl->regfile_wr_idx, idx);
+	if (!rc) {
+		BNXT_TF_DBG(ERR, "Write regfile[%d] failed\n",
+			    tbl->regfile_wr_idx);
+		goto error;
+	}
+
+	if (!tbl->srch_b4_alloc) {
+		sparms.dir		= tbl->direction;
+		sparms.type		= tbl->table_type;
+		sparms.data		= ulp_blob_data_get(&data, &tmplen);
+		sparms.data_sz_in_bytes =
+			ULP_SZ_BITS2BYTES(tbl->result_bit_size);
+		sparms.idx		= aparms.idx;
+
+		rc = tf_set_tbl_entry(tfp, &sparms);
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Set table[%d][%s][%d] failed rc=%d\n",
+				    tbl->table_type,
+				    (tbl->direction == TF_DIR_RX) ? "RX" : "TX",
+				    sparms.idx,
+				    rc);
+
+			goto error;
+		}
+	}
+
+	/* Link the resource to the flow in the flow db */
+	memset(&fid_parms, 0, sizeof(fid_parms));
+	fid_parms.direction	= tbl->direction;
+	fid_parms.resource_func	= tbl->resource_func;
+	fid_parms.resource_type	= tbl->table_type;
+	fid_parms.resource_hndl	= aparms.idx;
+	fid_parms.critical_resource	= 0;
+
+	rc = ulp_flow_db_resource_add(parms->ulp_ctx,
+				      parms->tbl_idx,
+				      parms->fid,
+				      &fid_parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to link resource to flow rc = %d\n",
+			    rc);
+		goto error;
+	}
+
+	return rc;
+error:
+	/*
+	 * Free the allocated resource since we failed to either
+	 * write to the entry or link the flow
+	 */
+	free_parms.dir	= tbl->direction;
+	free_parms.type	= tbl->table_type;
+	free_parms.idx	= aparms.idx;
+
+	trc = tf_free_tbl_entry(tfp, &free_parms);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free tbl entry on failure\n");
+
+	return rc;
+}
+
 /*
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
@@ -362,3 +1097,48 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	return rc;
 }
+
+/* Create the classifier table entries for a flow. */
+int32_t
+ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
+{
+	uint32_t	i;
+	int32_t		rc = 0;
+
+	if (!parms)
+		return -EINVAL;
+
+	if (!parms->ctbls || !parms->num_ctbls) {
+		BNXT_TF_DBG(ERR, "No class tables for template[%d][%d].\n",
+			    parms->dev_id, parms->class_tid);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < parms->num_ctbls; i++) {
+		struct bnxt_ulp_mapper_class_tbl_info *tbl = &parms->ctbls[i];
+
+		switch (tbl->resource_func) {
+		case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
+			rc = ulp_mapper_tcam_tbl_process(parms, tbl);
+			break;
+		case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+			rc = ulp_mapper_em_tbl_process(parms, tbl);
+			break;
+		case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+			rc = ulp_mapper_index_tbl_process(parms, tbl);
+			break;
+		default:
+			BNXT_TF_DBG(ERR, "Unexpected class resource %d\n",
+				    tbl->resource_func);
+			return -EINVAL;
+		}
+
+		if (rc) {
+			BNXT_TF_DBG(ERR, "Resource type %d failed\n",
+				    tbl->resource_func);
+			return rc;
+		}
+	}
+
+	return rc;
+}
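
One convention worth noting in the mapper code above: values are always
written to the regfile in big-endian so that later opcodes can byte-copy
operands straight into key/result blobs. A hedged sketch of the round trip
(handle and idx are hypothetical locals; tfp_be_to_cpu_64 assumed as the
counterpart of tfp_cpu_to_be_64 used above):

	uint64_t be_val = tfp_cpu_to_be_64(handle);

	if (!ulp_regfile_write(parms->regfile, idx, be_val))
		return -EINVAL;	/* write failed */
	if (!ulp_regfile_read(parms->regfile, idx, &be_val))
		return -EINVAL;	/* read failed */
	handle = tfp_be_to_cpu_64(be_val);
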
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index adbcec2..2221e12 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -15,6 +15,8 @@
 #include "bnxt_ulp.h"
 #include "ulp_utils.h"
 
+#define ULP_SZ_BITS2BYTES(x) (((x) + 7) / 8)
+
 /* Internal Structure for passing the arguments around */
 struct bnxt_ulp_mapper_parms {
 	uint32_t				dev_id;
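
The new ULP_SZ_BITS2BYTES() macro simply rounds a bit length up to whole
bytes, e.g. (values shown for illustration):

	ULP_SZ_BITS2BYTES(1);	/* -> 1 */
	ULP_SZ_BITS2BYTES(16);	/* -> 2 */
	ULP_SZ_BITS2BYTES(17);	/* -> 3 */
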
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 9e4307e..837064e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -6,14 +6,71 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_log.h>
+#include "bnxt.h"
 #include "bnxt_ulp.h"
 #include "tf_ext_flow_handle.h"
 #include "ulp_mark_mgr.h"
 #include "bnxt_tf_common.h"
-#include "../bnxt.h"
 #include "ulp_template_db.h"
 #include "ulp_template_struct.h"
 
+static inline uint32_t
+ulp_mark_db_idx_get(bool is_gfid, uint32_t fid, struct bnxt_ulp_mark_tbl *mtbl)
+{
+	uint32_t idx = 0, hashtype = 0;
+
+	if (is_gfid) {
+		TF_GET_HASH_TYPE_FROM_GFID(fid, hashtype);
+		TF_GET_HASH_INDEX_FROM_GFID(fid, idx);
+
+		/* Need to truncate anything beyond supported flows */
+		idx &= mtbl->gfid_mask;
+
+		if (hashtype)
+			idx |= mtbl->gfid_type_bit;
+	} else {
+		idx = fid;
+	}
+
+	return idx;
+}
+
+static int32_t
+ulp_mark_db_mark_set(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t mark)
+{
+	struct bnxt_ulp_mark_tbl	*mtbl;
+	uint32_t			idx = 0;
+
+	if (!ctxt) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+	if (!mtbl) {
+		BNXT_TF_DBG(ERR, "Unable to get Mark DB\n");
+		return -EINVAL;
+	}
+
+	idx = ulp_mark_db_idx_get(is_gfid, fid, mtbl);
+
+	if (is_gfid) {
+		BNXT_TF_DBG(DEBUG, "Set GFID[0x%0x] = 0x%0x\n", idx, mark);
+
+		mtbl->gfid_tbl[idx].mark_id = mark;
+		mtbl->gfid_tbl[idx].valid = true;
+	} else {
+		/* For the LFID, the FID is used as the index */
+		mtbl->lfid_tbl[fid].mark_id = mark;
+		mtbl->lfid_tbl[fid].valid = true;
+	}
+
+	return 0;
+}
+
 /*
  * Allocate and Initialize all Mark Manager resources for this ulp context.
  *
@@ -117,3 +174,24 @@ ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
 
 	return 0;
 }
+
+/*
+ * Adds a Mark to the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * fid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] The mark to be associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t mark)
+{
+	return ulp_mark_db_mark_set(ctxt, is_gfid, fid, mark);
+}
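
To make the GFID-to-index mapping in ulp_mark_db_idx_get() concrete, a worked
example with hypothetical table parameters:

	/*
	 * Assume gfid_mask = 0x3ff and gfid_type_bit = 0x400. A GFID whose
	 * hash index is 0x1234 with hash type 1 then maps to:
	 *   idx = (0x1234 & 0x3ff) | 0x400 = 0x234 | 0x400 = 0x634
	 */
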
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index 5948683..18abea4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -54,4 +54,22 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt);
 int32_t
 ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
 
+/*
+ * Adds a Mark to the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * fid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] The mark to be associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t mark);
+
 #endif /* _ULP_MARK_MGR_H_ */
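
A hedged usage sketch of the new API, mirroring the call made from the EM
insert path in ulp_mapper.c (ulp_ctx, gfid and mark are hypothetical locals):

	rc = ulp_mark_db_mark_add(ulp_ctx, true /* is_gfid */, gfid, mark);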
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index fc77800..aefece8 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -9,6 +9,7 @@
  */
 
 #include "ulp_template_db.h"
+#include "ulp_template_field_db.h"
 #include "ulp_template_struct.h"
 
 uint32_t ulp_act_prop_map_table[] = {
@@ -109,6 +110,834 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 	}
 };
 
+struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_ETH_DMAC >> 8) & 0xff,
+		BNXT_ULP_HF0_O_ETH_DMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD,
+	.mask_operand = {(BNXT_ULP_HF0_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF0_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_SVIF_INDEX >> 8) & 0xff,
+		BNXT_ULP_HF0_SVIF_INDEX & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 12,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_TUN_HDR_TYPE_NONE,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_L3_HDR_TYPE_IPV4,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_L2_HDR_TYPE_DIX,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 9,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x40, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_PKT_TYPE_L2,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+		0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 251,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_ADD_PAD,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
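+	/* For SET_TO_HDR_FIELD, the first two operand bytes carry the
+	 * 16-bit header field index (enum bnxt_ulp_hf0) in big-endian
+	 * byte order.
+	 */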
+	.spec_operand = {(BNXT_ULP_HF0_O_UDP_DST_PORT >> 8) & 0xff,
+		BNXT_ULP_HF0_O_UDP_DST_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_UDP_SRC_PORT >> 8) & 0xff,
+		BNXT_ULP_HF0_O_UDP_SRC_PORT & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {BNXT_ULP_SYM_IP_PROTO_UDP,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_IPV4_DST_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF0_O_IPV4_DST_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 32,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_IPV4_SRC_ADDR >> 8) & 0xff,
+		BNXT_ULP_HF0_O_IPV4_SRC_ADDR & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 48,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD,
+	.spec_operand = {(BNXT_ULP_HF0_O_ETH_SMAC >> 8) & 0xff,
+		BNXT_ULP_HF0_O_ETH_SMAC & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 24,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_REGFILE,
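+	/* For SET_TO_REGFILE, the first two operand bytes carry the
+	 * 16-bit regfile index (enum bnxt_ulp_regfile_index) in
+	 * big-endian byte order.
+	 */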
+	.spec_operand = {(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.mask_opcode = BNXT_ULP_MASK_OPC_SET_TO_CONSTANT,
+	.mask_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+	.spec_opcode = BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT,
+	.spec_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
+struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_REGFILE,
+	.result_operand = {(BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 7,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x40, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 6,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 3,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 16,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 4,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 10,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {(0x00fd >> 8) & 0xff,
+		0x00fd & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 5,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x15, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 8,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 33,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_REGFILE,
+	.result_operand = {(BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN >> 8) & 0xff,
+		BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 5,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x03, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 9,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {(0x00c5 >> 8) & 0xff,
+		0x00c5 & 0xff,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 11,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 2,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x03, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	},
+	{
+	.field_bit_size = 1,
+	.result_opcode = BNXT_ULP_RESULT_OPC_SET_TO_CONSTANT,
+	.result_operand = {0x01, 0x00, 0x00, 0x00, 0x00, 0x00,
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}
+	}
+};
+
+struct bnxt_ulp_mapper_ident_info ulp_ident_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+	.ident_type = TF_IDENT_TYPE_L2_CTXT,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0,
+	.ident_bit_size = 10,
+	.ident_bit_pos = 54
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_IDENTIFIER,
+	.ident_type = TF_IDENT_TYPE_EM_PROF,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0,
+	.ident_bit_size = 8,
+	.ident_bit_pos = 2
+	}
+};
+
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
 	.field_bit_size = 14,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index e52cc3f..733836a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -13,6 +13,37 @@
 
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 
+enum bnxt_ulp_action_bit {
+	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
+	BNXT_ULP_ACTION_BIT_DROP             = 0x0000000000000002,
+	BNXT_ULP_ACTION_BIT_COUNT            = 0x0000000000000004,
+	BNXT_ULP_ACTION_BIT_RSS              = 0x0000000000000008,
+	BNXT_ULP_ACTION_BIT_METER            = 0x0000000000000010,
+	BNXT_ULP_ACTION_BIT_VNIC             = 0x0000000000000020,
+	BNXT_ULP_ACTION_BIT_VPORT            = 0x0000000000000040,
+	BNXT_ULP_ACTION_BIT_VXLAN_DECAP      = 0x0000000000000080,
+	BNXT_ULP_ACTION_BIT_NVGRE_DECAP      = 0x0000000000000100,
+	BNXT_ULP_ACTION_BIT_OF_POP_MPLS      = 0x0000000000000200,
+	BNXT_ULP_ACTION_BIT_OF_PUSH_MPLS     = 0x0000000000000400,
+	BNXT_ULP_ACTION_BIT_MAC_SWAP         = 0x0000000000000800,
+	BNXT_ULP_ACTION_BIT_SET_MAC_SRC      = 0x0000000000001000,
+	BNXT_ULP_ACTION_BIT_SET_MAC_DST      = 0x0000000000002000,
+	BNXT_ULP_ACTION_BIT_OF_POP_VLAN      = 0x0000000000004000,
+	BNXT_ULP_ACTION_BIT_OF_PUSH_VLAN     = 0x0000000000008000,
+	BNXT_ULP_ACTION_BIT_OF_SET_VLAN_PCP  = 0x0000000000010000,
+	BNXT_ULP_ACTION_BIT_OF_SET_VLAN_VID  = 0x0000000000020000,
+	BNXT_ULP_ACTION_BIT_SET_IPV4_SRC     = 0x0000000000040000,
+	BNXT_ULP_ACTION_BIT_SET_IPV4_DST     = 0x0000000000080000,
+	BNXT_ULP_ACTION_BIT_SET_IPV6_SRC     = 0x0000000000100000,
+	BNXT_ULP_ACTION_BIT_SET_IPV6_DST     = 0x0000000000200000,
+	BNXT_ULP_ACTION_BIT_DEC_TTL          = 0x0000000000400000,
+	BNXT_ULP_ACTION_BIT_SET_TP_SRC       = 0x0000000000800000,
+	BNXT_ULP_ACTION_BIT_SET_TP_DST       = 0x0000000001000000,
+	BNXT_ULP_ACTION_BIT_VXLAN_ENCAP      = 0x0000000002000000,
+	BNXT_ULP_ACTION_BIT_NVGRE_ENCAP      = 0x0000000004000000,
+	BNXT_ULP_ACTION_BIT_LAST             = 0x0000000008000000
+};
+
 enum bnxt_ulp_byte_order {
 	BNXT_ULP_BYTE_ORDER_BE,
 	BNXT_ULP_BYTE_ORDER_LE,
@@ -35,8 +66,48 @@ enum bnxt_ulp_fmf_mask {
 	BNXT_ULP_FMF_MASK_LAST
 };
 
+enum bnxt_ulp_mark_enable {
+	BNXT_ULP_MARK_ENABLE_NO = 0,
+	BNXT_ULP_MARK_ENABLE_YES = 1,
+	BNXT_ULP_MARK_ENABLE_LAST = 2
+};
+
+enum bnxt_ulp_mask_opc {
+	BNXT_ULP_MASK_OPC_SET_TO_CONSTANT = 0,
+	BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD = 1,
+	BNXT_ULP_MASK_OPC_SET_TO_REGFILE = 2,
+	BNXT_ULP_MASK_OPC_ADD_PAD = 3,
+	BNXT_ULP_MASK_OPC_LAST = 4
+};
+
+enum bnxt_ulp_priority {
+	BNXT_ULP_PRIORITY_LEVEL_0 = 0,
+	BNXT_ULP_PRIORITY_LEVEL_1 = 1,
+	BNXT_ULP_PRIORITY_LEVEL_2 = 2,
+	BNXT_ULP_PRIORITY_LEVEL_3 = 3,
+	BNXT_ULP_PRIORITY_LEVEL_4 = 4,
+	BNXT_ULP_PRIORITY_LEVEL_5 = 5,
+	BNXT_ULP_PRIORITY_LEVEL_6 = 6,
+	BNXT_ULP_PRIORITY_LEVEL_7 = 7,
+	BNXT_ULP_PRIORITY_NOT_USED = 8,
+	BNXT_ULP_PRIORITY_LAST = 9
+};
+
 enum bnxt_ulp_regfile_index {
-	BNXT_ULP_REGFILE_INDEX_LAST
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_0 = 0,
+	BNXT_ULP_REGFILE_INDEX_L2_CNTXT_ID_1 = 1,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_0 = 2,
+	BNXT_ULP_REGFILE_INDEX_PROF_FUNC_ID_1 = 3,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_0 = 4,
+	BNXT_ULP_REGFILE_INDEX_EM_PROFILE_ID_1 = 5,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_0 = 6,
+	BNXT_ULP_REGFILE_INDEX_WC_PROFILE_ID_1 = 7,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN = 8,
+	BNXT_ULP_REGFILE_INDEX_ACTION_PTR_0 = 9,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_0 = 10,
+	BNXT_ULP_REGFILE_INDEX_ENCAP_PTR_1 = 11,
+	BNXT_ULP_REGFILE_INDEX_NOT_USED = 12,
+	BNXT_ULP_REGFILE_INDEX_LAST = 13
 };
 
 enum bnxt_ulp_resource_func {
@@ -56,9 +127,78 @@ enum bnxt_ulp_result_opc {
 	BNXT_ULP_RESULT_OPC_LAST = 4
 };
 
+enum bnxt_ulp_spec_opc {
+	BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT = 0,
+	BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD = 1,
+	BNXT_ULP_SPEC_OPC_SET_TO_REGFILE = 2,
+	BNXT_ULP_SPEC_OPC_ADD_PAD = 3,
+	BNXT_ULP_SPEC_OPC_LAST = 4
+};
+
 enum bnxt_ulp_sym {
+	BNXT_ULP_SYM_BIG_ENDIAN = 0,
 	BNXT_ULP_SYM_DECAP_FUNC_NONE = 0,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L2 = 11,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L3 = 12,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_L4 = 13,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL2 = 3,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL3 = 8,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TL4 = 9,
+	BNXT_ULP_SYM_DECAP_FUNC_THRU_TUN = 10,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_ECV_L3_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8847 = 6,
+	BNXT_ULP_SYM_ECV_L3_TYPE_MPLS_8848 = 7,
+	BNXT_ULP_SYM_ECV_L3_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP = 4,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_CSUM = 5,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY = 6,
+	BNXT_ULP_SYM_ECV_L4_TYPE_UDP_ENTROPY_CSUM = 7,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GENERIC = 1,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_GRE = 5,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NGE = 3,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NONE = 0,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_NVGRE = 4,
+	BNXT_ULP_SYM_ECV_TUN_TYPE_VXLAN = 2,
+	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
+	BNXT_ULP_SYM_L2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC = 2,
+	BNXT_ULP_SYM_L2_HDR_TYPE_LLC_SNAP = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ARP = 2,
+	BNXT_ULP_SYM_L3_HDR_TYPE_EAPOL = 4,
+	BNXT_ULP_SYM_L3_HDR_TYPE_FCOE = 6,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_L3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_L3_HDR_TYPE_PTP = 3,
+	BNXT_ULP_SYM_L3_HDR_TYPE_ROCE = 5,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR1 = 7,
+	BNXT_ULP_SYM_L3_HDR_TYPE_UPAR2 = 8,
+	BNXT_ULP_SYM_L4_HDR_TYPE_BTH_V1 = 5,
+	BNXT_ULP_SYM_L4_HDR_TYPE_ICMP = 2,
+	BNXT_ULP_SYM_L4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR1 = 3,
+	BNXT_ULP_SYM_L4_HDR_TYPE_UPAR2 = 4,
 	BNXT_ULP_SYM_LITTLE_ENDIAN = 1,
+	BNXT_ULP_SYM_NO = 0,
+	BNXT_ULP_SYM_PKT_TYPE_L2 = 0,
+	BNXT_ULP_SYM_TL2_HDR_TYPE_DIX = 0,
+	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV4 = 0,
+	BNXT_ULP_SYM_TL3_HDR_TYPE_IPV6 = 1,
+	BNXT_ULP_SYM_TL4_HDR_TYPE_TCP = 0,
+	BNXT_ULP_SYM_TL4_HDR_TYPE_UDP = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_GENEVE = 1,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_GRE = 3,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV4 = 4,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_IPV6 = 5,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_MPLS = 7,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NONE = 15,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_NVGRE = 2,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_PPPOE = 6,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR1 = 8,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_UPAR2 = 9,
+	BNXT_ULP_SYM_TUN_HDR_TYPE_VXLAN = 0,
 	BNXT_ULP_SYM_YES = 1
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_field_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
new file mode 100644
index 0000000..1bc4449
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+/* date: Mon Mar  9 02:37:53 2020
+ * version: 0.0
+ */
+
+#ifndef _ULP_HDR_FIELD_ENUMS_H_
+#define _ULP_HDR_FIELD_ENUMS_H_
+
+enum bnxt_ulp_hf0 {
+	BNXT_ULP_HF0_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HF0_O_VTAG_NUM = 1,
+	BNXT_ULP_HF0_I_VTAG_NUM = 2,
+	BNXT_ULP_HF0_SVIF_INDEX = 3,
+	BNXT_ULP_HF0_O_ETH_DMAC = 4,
+	BNXT_ULP_HF0_O_ETH_SMAC = 5,
+	BNXT_ULP_HF0_O_ETH_TYPE = 6,
+	BNXT_ULP_HF0_OO_VLAN_CFI_PRI = 7,
+	BNXT_ULP_HF0_OO_VLAN_VID = 8,
+	BNXT_ULP_HF0_OO_VLAN_TYPE = 9,
+	BNXT_ULP_HF0_OI_VLAN_CFI_PRI = 10,
+	BNXT_ULP_HF0_OI_VLAN_VID = 11,
+	BNXT_ULP_HF0_OI_VLAN_TYPE = 12,
+	BNXT_ULP_HF0_O_IPV4_VER = 13,
+	BNXT_ULP_HF0_O_IPV4_TOS = 14,
+	BNXT_ULP_HF0_O_IPV4_LEN = 15,
+	BNXT_ULP_HF0_O_IPV4_FRAG_ID = 16,
+	BNXT_ULP_HF0_O_IPV4_FRAG_OFF = 17,
+	BNXT_ULP_HF0_O_IPV4_TTL = 18,
+	BNXT_ULP_HF0_O_IPV4_NEXT_PID = 19,
+	BNXT_ULP_HF0_O_IPV4_CSUM = 20,
+	BNXT_ULP_HF0_O_IPV4_SRC_ADDR = 21,
+	BNXT_ULP_HF0_O_IPV4_DST_ADDR = 22,
+	BNXT_ULP_HF0_O_UDP_SRC_PORT = 23,
+	BNXT_ULP_HF0_O_UDP_DST_PORT = 24,
+	BNXT_ULP_HF0_O_UDP_LENGTH = 25,
+	BNXT_ULP_HF0_O_UDP_CSUM = 26,
+	BNXT_ULP_HF0_T_VXLAN_FLAGS = 27,
+	BNXT_ULP_HF0_T_VXLAN_RSVD0 = 28,
+	BNXT_ULP_HF0_T_VXLAN_VNI = 29,
+	BNXT_ULP_HF0_T_VXLAN_RSVD1 = 30,
+	BNXT_ULP_HF0_I_ETH_DMAC = 31,
+	BNXT_ULP_HF0_I_ETH_SMAC = 32,
+	BNXT_ULP_HF0_I_ETH_TYPE = 33,
+	BNXT_ULP_HF0_IO_VLAN_CFI_PRI = 34,
+	BNXT_ULP_HF0_IO_VLAN_VID = 35,
+	BNXT_ULP_HF0_IO_VLAN_TYPE = 36,
+	BNXT_ULP_HF0_II_VLAN_CFI_PRI = 37,
+	BNXT_ULP_HF0_II_VLAN_VID = 38,
+	BNXT_ULP_HF0_II_VLAN_TYPE = 39,
+	BNXT_ULP_HF0_I_IPV4_VER = 40,
+	BNXT_ULP_HF0_I_IPV4_TOS = 41,
+	BNXT_ULP_HF0_I_IPV4_LEN = 42,
+	BNXT_ULP_HF0_I_IPV4_FRAG_ID = 43,
+	BNXT_ULP_HF0_I_IPV4_FRAG_OFF = 44,
+	BNXT_ULP_HF0_I_IPV4_TTL = 45,
+	BNXT_ULP_HF0_I_IPV4_NEXT_PID = 46,
+	BNXT_ULP_HF0_I_IPV4_CSUM = 47,
+	BNXT_ULP_HF0_I_IPV4_SRC_ADDR = 48,
+	BNXT_ULP_HF0_I_IPV4_DST_ADDR = 49,
+	BNXT_ULP_HF0_I_UDP_SRC_PORT = 50,
+	BNXT_ULP_HF0_I_UDP_DST_PORT = 51,
+	BNXT_ULP_HF0_I_UDP_LENGTH = 52,
+	BNXT_ULP_HF0_I_UDP_CSUM = 53
+};
+
+enum bnxt_ulp_hf1 {
+	BNXT_ULP_HF1_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HF1_O_VTAG_NUM = 1,
+	BNXT_ULP_HF1_I_VTAG_NUM = 2,
+	BNXT_ULP_HF1_SVIF_INDEX = 3,
+	BNXT_ULP_HF1_O_ETH_DMAC = 4,
+	BNXT_ULP_HF1_O_ETH_SMAC = 5,
+	BNXT_ULP_HF1_O_ETH_TYPE = 6,
+	BNXT_ULP_HF1_OO_VLAN_CFI_PRI = 7,
+	BNXT_ULP_HF1_OO_VLAN_VID = 8,
+	BNXT_ULP_HF1_OO_VLAN_TYPE = 9,
+	BNXT_ULP_HF1_OI_VLAN_CFI_PRI = 10,
+	BNXT_ULP_HF1_OI_VLAN_VID = 11,
+	BNXT_ULP_HF1_OI_VLAN_TYPE = 12,
+	BNXT_ULP_HF1_O_IPV4_VER = 13,
+	BNXT_ULP_HF1_O_IPV4_TOS = 14,
+	BNXT_ULP_HF1_O_IPV4_LEN = 15,
+	BNXT_ULP_HF1_O_IPV4_FRAG_ID = 16,
+	BNXT_ULP_HF1_O_IPV4_FRAG_OFF = 17,
+	BNXT_ULP_HF1_O_IPV4_TTL = 18,
+	BNXT_ULP_HF1_O_IPV4_NEXT_PID = 19,
+	BNXT_ULP_HF1_O_IPV4_CSUM = 20,
+	BNXT_ULP_HF1_O_IPV4_SRC_ADDR = 21,
+	BNXT_ULP_HF1_O_IPV4_DST_ADDR = 22,
+	BNXT_ULP_HF1_O_UDP_SRC_PORT = 23,
+	BNXT_ULP_HF1_O_UDP_DST_PORT = 24,
+	BNXT_ULP_HF1_O_UDP_LENGTH = 25,
+	BNXT_ULP_HF1_O_UDP_CSUM = 26
+};
+
+enum bnxt_ulp_hf2 {
+	BNXT_ULP_HF2_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HF2_O_VTAG_NUM = 1,
+	BNXT_ULP_HF2_I_VTAG_NUM = 2,
+	BNXT_ULP_HF2_SVIF_INDEX = 3,
+	BNXT_ULP_HF2_O_ETH_DMAC = 4,
+	BNXT_ULP_HF2_O_ETH_SMAC = 5,
+	BNXT_ULP_HF2_O_ETH_TYPE = 6,
+	BNXT_ULP_HF2_OO_VLAN_CFI_PRI = 7,
+	BNXT_ULP_HF2_OO_VLAN_VID = 8,
+	BNXT_ULP_HF2_OO_VLAN_TYPE = 9,
+	BNXT_ULP_HF2_OI_VLAN_CFI_PRI = 10,
+	BNXT_ULP_HF2_OI_VLAN_VID = 11,
+	BNXT_ULP_HF2_OI_VLAN_TYPE = 12,
+	BNXT_ULP_HF2_O_IPV4_VER = 13,
+	BNXT_ULP_HF2_O_IPV4_TOS = 14,
+	BNXT_ULP_HF2_O_IPV4_LEN = 15,
+	BNXT_ULP_HF2_O_IPV4_FRAG_ID = 16,
+	BNXT_ULP_HF2_O_IPV4_FRAG_OFF = 17,
+	BNXT_ULP_HF2_O_IPV4_TTL = 18,
+	BNXT_ULP_HF2_O_IPV4_NEXT_PID = 19,
+	BNXT_ULP_HF2_O_IPV4_CSUM = 20,
+	BNXT_ULP_HF2_O_IPV4_SRC_ADDR = 21,
+	BNXT_ULP_HF2_O_IPV4_DST_ADDR = 22,
+	BNXT_ULP_HF2_O_UDP_SRC_PORT = 23,
+	BNXT_ULP_HF2_O_UDP_DST_PORT = 24,
+	BNXT_ULP_HF2_O_UDP_LENGTH = 25,
+	BNXT_ULP_HF2_O_UDP_CSUM = 26
+};
+
+#endif /* _ULP_HDR_FIELD_ENUMS_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 2b0a3d7..e28d049 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,9 +17,21 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
+/* Structure to store the protocol fields */
+#define RTE_PARSER_FLOW_HDR_FIELD_SIZE		16
+struct ulp_rte_hdr_field {
+	uint8_t		spec[RTE_PARSER_FLOW_HDR_FIELD_SIZE];
+	uint8_t		mask[RTE_PARSER_FLOW_HDR_FIELD_SIZE];
+	uint32_t	size;
+};
+
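+/*
+ * Structure to hold the action bitmap. The bits field is intended to be
+ * a bitwise OR of the enum bnxt_ulp_action_bit flags parsed from the
+ * rte_flow actions.
+ */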
+struct ulp_rte_act_bitmap {
+	uint64_t	bits;
+};
+
 /*
- * structure to hold the action property details
- * It is a array of 128 bytes
+ * Structure to hold the action property details.
+ * It is an array of 128 bytes.
  */
 struct ulp_rte_act_prop {
 	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
@@ -39,6 +51,35 @@ struct bnxt_ulp_device_params {
 	uint32_t			num_resources_per_flow;
 };
 
+struct bnxt_ulp_mapper_class_tbl_info {
+	enum bnxt_ulp_resource_func	resource_func;
+	uint32_t	table_type;
+	uint8_t		direction;
+	uint8_t		mem;
+	uint32_t	priority;
+	uint8_t		srch_b4_alloc;
+	uint32_t	critical_resource;
+
+	/* Information for accessing the ulp_key_field_list */
+	uint32_t	key_start_idx;
+	uint16_t	key_bit_size;
+	uint16_t	key_num_fields;
+	/* Size of the blob that holds the key */
+	uint16_t	blob_key_bit_size;
+
+	/* Information for accessing the ulp_class_result_field_list */
+	uint32_t	result_start_idx;
+	uint16_t	result_bit_size;
+	uint16_t	result_num_fields;
+
+	/* Information for accessing the ulp_ident_list */
+	uint32_t	ident_start_idx;
+	uint16_t	ident_nums;
+
+	uint8_t		mark_enable;
+	enum bnxt_ulp_regfile_index	regfile_wr_idx;
+};
+
 struct bnxt_ulp_mapper_act_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	enum tf_tbl_type table_type;
@@ -52,6 +93,15 @@ struct bnxt_ulp_mapper_act_tbl_info {
 	enum bnxt_ulp_regfile_index	regfile_wr_idx;
 };
 
+struct bnxt_ulp_mapper_class_key_field_info {
+	uint8_t			name[64];
+	enum bnxt_ulp_mask_opc	mask_opcode;
+	enum bnxt_ulp_spec_opc	spec_opcode;
+	uint16_t		field_bit_size;
+	uint8_t			mask_operand[16];
+	uint8_t			spec_operand[16];
+};
+
 struct bnxt_ulp_mapper_result_field_info {
 	uint8_t				name[64];
 	enum bnxt_ulp_result_opc	result_opcode;
@@ -59,14 +109,36 @@ struct bnxt_ulp_mapper_result_field_info {
 	uint8_t				result_operand[16];
 };
 
+struct bnxt_ulp_mapper_ident_info {
+	uint8_t		name[64];
+	uint32_t	resource_func;
+
+	uint16_t	ident_type;
+	uint16_t	ident_bit_size;
+	uint16_t	ident_bit_pos;
+	enum bnxt_ulp_regfile_index	regfile_wr_idx;
+};
+
+/*
+ * Flow Mapper Static Data Externs:
+ * Access to the below static data should be done through access functions and
+ * not directly throughout the code.
+ */
+
 /*
- * The ulp_device_params is indexed by the dev_id
- * This table maintains the device specific parameters
+ * The ulp_device_params is indexed by the dev_id.
+ * This table maintains the device specific parameters.
  */
 extern struct bnxt_ulp_device_params ulp_device_params[];
 
 /*
  * The ulp_data_field_list provides the instructions for creating an action
+ * record; the classifier counterpart below provides the instructions for
+ * creating classifier results such as tcam/em records.
+ */
+extern struct bnxt_ulp_mapper_result_field_info	ulp_class_result_field_list[];
+
+/*
+ * The ulp_act_result_field_list provides the instructions for creating an action
  * record.  It uses the same structure as the result list, but is only used for
  * actions.
  */
@@ -75,6 +147,19 @@ struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[];
 
 /*
  * The ulp_act_prop_map_table provides the mapping to index and size of action
+ * properties; the ulp_class_key_field_list below provides the instructions
+ * for building the key and mask fields of the classifier tcam and em tables.
+ */
+extern
+struct bnxt_ulp_mapper_class_key_field_info	ulp_class_key_field_list[];
+
+/*
+ * The ulp_ident_list provides the instructions for creating identifiers such
+ * as profile ids.
+ */
+extern struct bnxt_ulp_mapper_ident_info	ulp_ident_list[];
+
+/*
+ * The ulp_act_prop_map_table provides the mapping to index and size of action
  * properties.
  */
 extern uint32_t ulp_act_prop_map_table[];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 21/34] net/bnxt: add support to free key and action tables
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (19 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 20/34] net/bnxt: add support to process key tables Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 22/34] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
                         ` (15 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Kishore Padmanabha, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch does the following:
1. Gets all the flow resources from the flow id
2. Frees all the table resources
3. Frees the flow in the flow table (see the sketch below)
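
For reference, a minimal caller-side sketch of the teardown path added
here (ulp_ctx and fid are placeholders for values obtained from the
flow-create path; this sketch is not part of the patch):

	int32_t rc;

	/* Frees every table resource linked to the flow, then returns
	 * the fid to the flow table.
	 */
	rc = ulp_mapper_flow_destroy(ulp_ctx, fid);
	if (rc)
		BNXT_TF_DBG(ERR, "Failed to free flow 0x%x rc=%d\n",
			    fid, rc);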

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c  | 199 ++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h  |  30 +++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c   | 193 ++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_mapper.h   |  13 +++
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  23 +++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |  18 +++
 6 files changed, 474 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index 6e73f25..eecee6b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -23,6 +23,32 @@
 #define ULP_FLOW_DB_RES_NXT_RESET(dst)	((dst) &= ~(ULP_FLOW_DB_RES_NXT_MASK))
 
 /*
+ * Helper function to set or reset a bit in the active flow table.
+ * No validation is done in this function.
+ *
+ * flow_tbl [in] Ptr to flow table
+ * idx [in] The index of the bit to be set or reset.
+ * flag [in] 1 to set and 0 to reset.
+ *
+ * returns none
+ */
+static void
+ulp_flow_db_active_flow_set(struct bnxt_ulp_flow_tbl	*flow_tbl,
+			    uint32_t			idx,
+			    uint32_t			flag)
+{
+	uint32_t		active_index;
+
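+	/* Locate the bitmap word that holds the bit for this flow index. */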
+	active_index = idx / ULP_INDEX_BITMAP_SIZE;
+	if (flag)
+		ULP_INDEX_BITMAP_SET(flow_tbl->active_flow_tbl[active_index],
+				     idx);
+	else
+		ULP_INDEX_BITMAP_RESET(flow_tbl->active_flow_tbl[active_index],
+				       idx);
+}
+
+/*
  * Helper function to allocate the flow table and initialize
  *  is set.No validation being done in this function.
  *
@@ -71,6 +97,35 @@ ulp_flow_db_res_params_to_info(struct ulp_fdb_resource_info   *resource_info,
 }
 
 /*
+ * Helper function to copy the resource info to the resource params.
+ * No validation is done in this function.
+ *
+ * resource_info [in] Ptr to resource information
+ * params [out] The output params to the caller
+ *
+ * returns none
+ */
+static void
+ulp_flow_db_res_info_to_params(struct ulp_fdb_resource_info   *resource_info,
+			       struct ulp_flow_db_res_params  *params)
+{
+	memset(params, 0, sizeof(struct ulp_flow_db_res_params));
+	params->direction = ((resource_info->nxt_resource_idx &
+				 ULP_FLOW_DB_RES_DIR_MASK) >>
+				 ULP_FLOW_DB_RES_DIR_BIT);
+	params->resource_func = ((resource_info->nxt_resource_idx &
+				 ULP_FLOW_DB_RES_FUNC_MASK) >>
+				 ULP_FLOW_DB_RES_FUNC_BITS);
+
+	if (params->resource_func != BNXT_ULP_RESOURCE_FUNC_EM_TABLE) {
+		params->resource_hndl = resource_info->resource_hndl;
+		params->resource_type = resource_info->resource_type;
+	} else {
+		params->resource_hndl = resource_info->resource_em_handle;
+	}
+}
+
+/*
  * Helper function to allocate the flow table and initialize
  * the stack for allocation operations.
  *
@@ -122,7 +177,7 @@ ulp_flow_db_alloc_resource(struct bnxt_ulp_flow_db *flow_db,
 }
 
 /*
- * Helper function to de allocate the flow table.
+ * Helper function to deallocate the flow table.
  *
  * flow_db [in] Ptr to flow database structure
  * tbl_idx [in] The index to table creation.
@@ -321,3 +376,145 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 	/* all good, return success */
 	return 0;
 }
+
+/*
+ * Free the flow database entry.
+ * The params->critical_resource has to be set to 1 to free the first resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in/out] The contents to be copied into params.
+ * Only the critical_resource needs to be set by the caller.
+ *
+ * Returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+	struct ulp_fdb_resource_info	*nxt_resource, *fid_resource;
+	uint32_t			nxt_idx = 0;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (tbl_idx >= BNXT_ULP_FLOW_TABLE_MAX) {
+		BNXT_TF_DBG(ERR, "Invalid table index\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* check for max flows */
+	if (fid >= flow_tbl->num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return -EINVAL;
+	}
+
+	/* check if the flow is active or not */
+	if (!ulp_flow_db_active_flow_is_set(flow_tbl, fid)) {
+		BNXT_TF_DBG(ERR, "flow does not exist\n");
+		return -EINVAL;
+	}
+
+	fid_resource = &flow_tbl->flow_resources[fid];
+	if (!params->critical_resource) {
+		/* Not the critical resource so free the resource */
+		ULP_FLOW_DB_RES_NXT_SET(nxt_idx,
+					fid_resource->nxt_resource_idx);
+		if (!nxt_idx) {
+			/* reached end of resources */
+			return -ENOENT;
+		}
+		nxt_resource = &flow_tbl->flow_resources[nxt_idx];
+
+		/* connect the fid resource to the next resource */
+		ULP_FLOW_DB_RES_NXT_RESET(fid_resource->nxt_resource_idx);
+		ULP_FLOW_DB_RES_NXT_SET(fid_resource->nxt_resource_idx,
+					nxt_resource->nxt_resource_idx);
+
+		/* update the contents to be given to caller */
+		ulp_flow_db_res_info_to_params(nxt_resource, params);
+
+		/* Delete the nxt_resource */
+		memset(nxt_resource, 0, sizeof(struct ulp_fdb_resource_info));
+
+		/* add it to the free list */
+		flow_tbl->tail_index++;
+		if (flow_tbl->tail_index >= flow_tbl->num_resources) {
+			BNXT_TF_DBG(ERR, "FlowDB:Tail reached max\n");
+			return -ENOENT;
+		}
+		flow_tbl->flow_tbl_stack[flow_tbl->tail_index] = nxt_idx;
+
+	} else {
+		/* Critical resource. copy the contents and exit */
+		ulp_flow_db_res_info_to_params(fid_resource, params);
+		ULP_FLOW_DB_RES_NXT_SET(nxt_idx,
+					fid_resource->nxt_resource_idx);
+		memset(fid_resource, 0, sizeof(struct ulp_fdb_resource_info));
+		ULP_FLOW_DB_RES_NXT_SET(fid_resource->nxt_resource_idx,
+					nxt_idx);
+	}
+
+	/* all good, return success */
+	return 0;
+}
+
+/*
+ * Free the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
+			     enum bnxt_ulp_flow_db_tables	tbl_idx,
+			     uint32_t				fid)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	if (tbl_idx >= BNXT_ULP_FLOW_TABLE_MAX) {
+		BNXT_TF_DBG(ERR, "Invalid table index\n");
+		return -EINVAL;
+	}
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+
+	/* check for limits of fid */
+	if (fid >= flow_tbl->num_flows || !fid) {
+		BNXT_TF_DBG(ERR, "Invalid flow index\n");
+		return -EINVAL;
+	}
+
+	/* check if the flow is active or not */
+	if (!ulp_flow_db_active_flow_is_set(flow_tbl, fid)) {
+		BNXT_TF_DBG(ERR, "flow does not exist\n");
+		return -EINVAL;
+	}
+	flow_tbl->head_index--;
+	if (!flow_tbl->head_index) {
+		BNXT_TF_DBG(ERR, "FlowDB: Head Ptr is zero\n");
+		return -ENOENT;
+	}
+	flow_tbl->flow_tbl_stack[flow_tbl->head_index] = fid;
+	ulp_flow_db_active_flow_set(flow_tbl, fid, 0);
+
+	/* all good, return success */
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index f6055a5..20109b9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -99,4 +99,34 @@ int32_t	ulp_flow_db_resource_add(struct bnxt_ulp_context	*ulp_ctxt,
 				 uint32_t			fid,
 				 struct ulp_flow_db_res_params	*params);
 
+/*
+ * Free the flow database entry.
+ * The params->critical_resource has to be set to 1 to free the first resource.
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ * params [in/out] The contents to be copied into params.
+ * Only the critical_resource needs to be set by the caller.
+ *
+ * Returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_resource_del(struct bnxt_ulp_context	*ulp_ctxt,
+				 enum bnxt_ulp_flow_db_tables	tbl_idx,
+				 uint32_t			fid,
+				 struct ulp_flow_db_res_params	*params);
+
+/*
+ * Free the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [in] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
+			     enum bnxt_ulp_flow_db_tables	tbl_idx,
+			     uint32_t				fid);
+
 #endif /* _ULP_FLOW_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index f378f8e..b3d981e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -143,6 +143,87 @@ ulp_mapper_ident_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
 	return &ulp_ident_list[idx];
 }
 
+static inline int32_t
+ulp_mapper_tcam_entry_free(struct bnxt_ulp_context *ulp  __rte_unused,
+			   struct tf *tfp,
+			   struct ulp_flow_db_res_params *res)
+{
+	struct tf_free_tcam_entry_parms fparms = {
+		.dir		= res->direction,
+		.tcam_tbl_type	= res->resource_type,
+		.idx		= (uint16_t)res->resource_hndl
+	};
+
+	return tf_free_tcam_entry(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_index_entry_free(struct bnxt_ulp_context *ulp  __rte_unused,
+			    struct tf *tfp,
+			    struct ulp_flow_db_res_params *res)
+{
+	struct tf_free_tbl_entry_parms fparms = {
+		.dir	= res->direction,
+		.type	= res->resource_type,
+		.idx	= (uint32_t)res->resource_hndl
+	};
+
+	return tf_free_tbl_entry(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_eem_entry_free(struct bnxt_ulp_context *ulp,
+			  struct tf *tfp,
+			  struct ulp_flow_db_res_params *res)
+{
+	struct tf_delete_em_entry_parms fparms = { 0 };
+	int32_t rc;
+
+	fparms.dir		= res->direction;
+	fparms.mem		= TF_MEM_EXTERNAL;
+	fparms.flow_handle	= res->resource_hndl;
+
+	rc = bnxt_ulp_cntxt_tbl_scope_id_get(ulp, &fparms.tbl_scope_id);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get table scope\n");
+		return -EINVAL;
+	}
+
+	return tf_delete_em_entry(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_ident_free(struct bnxt_ulp_context *ulp __rte_unused,
+		      struct tf *tfp,
+		      struct ulp_flow_db_res_params *res)
+{
+	struct tf_free_identifier_parms fparms = {
+		.dir		= res->direction,
+		.ident_type	= res->resource_type,
+		.id		= (uint16_t)res->resource_hndl
+	};
+
+	return tf_free_identifier(tfp, &fparms);
+}
+
+static inline int32_t
+ulp_mapper_mark_free(struct bnxt_ulp_context *ulp,
+		     struct ulp_flow_db_res_params *res)
+{
+	uint32_t flag;
+	uint32_t fid;
+	uint32_t gfid;
+
+	fid	  = (uint32_t)res->resource_hndl;
+	TF_GET_FLAG_FROM_FLOW_ID(fid, flag);
+	TF_GET_GFID_FROM_FLOW_ID(fid, gfid);
+
+	return ulp_mark_db_mark_del(ulp,
+				    (flag == TF_GFID_TABLE_EXTERNAL),
+				    gfid,
+				    0);
+}
+
 static int32_t
 ulp_mapper_ident_process(struct bnxt_ulp_mapper_parms *parms,
 			 struct bnxt_ulp_mapper_class_tbl_info *tbl,
@@ -1142,3 +1223,115 @@ ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 
 	return rc;
 }
+
+static int32_t
+ulp_mapper_resource_free(struct bnxt_ulp_context *ulp,
+			 struct ulp_flow_db_res_params *res)
+{
+	struct tf *tfp;
+	int32_t	rc = 0;
+
+	if (!res || !ulp) {
+		BNXT_TF_DBG(ERR, "Unable to free resource\n ");
+		return -EINVAL;
+	}
+
+	tfp = bnxt_ulp_cntxt_tfp_get(ulp);
+	if (!tfp) {
+		BNXT_TF_DBG(ERR, "Unable to free resource failed to get tfp\n");
+		return -EINVAL;
+	}
+
+	switch (res->resource_func) {
+	case BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE:
+		rc = ulp_mapper_tcam_entry_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_EM_TABLE:
+		rc = ulp_mapper_eem_entry_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE:
+		rc = ulp_mapper_index_entry_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_IDENTIFIER:
+		rc = ulp_mapper_ident_free(ulp, tfp, res);
+		break;
+	case BNXT_ULP_RESOURCE_FUNC_HW_FID:
+		rc = ulp_mapper_mark_free(ulp, res);
+		break;
+	default:
+		break;
+	}
+
+	return rc;
+}
+
+int32_t
+ulp_mapper_resources_free(struct bnxt_ulp_context	*ulp_ctx,
+			  uint32_t fid,
+			  enum bnxt_ulp_flow_db_tables	tbl_type)
+{
+	struct ulp_flow_db_res_params	res_parms = { 0 };
+	int32_t				rc, trc;
+
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * Set the critical resource on the first resource del, then iterate
+	 * while status is good
+	 */
+	res_parms.critical_resource = 1;
+	rc = ulp_flow_db_resource_del(ulp_ctx, tbl_type, fid, &res_parms);
+
+	if (rc) {
+		/*
+		 * This is unexpected on the first call to resource del.
+		 * It likely means that the flow did not exist in the flow db.
+		 */
+		BNXT_TF_DBG(ERR, "Flow[%d][0x%08x] failed to free (rc=%d)\n",
+			    tbl_type, fid, rc);
+		return rc;
+	}
+
+	while (!rc) {
+		trc = ulp_mapper_resource_free(ulp_ctx, &res_parms);
+		if (trc)
+			/*
+			 * On fail, we still need to attempt to free the
+			 * remaining resources.  Don't return
+			 */
+			BNXT_TF_DBG(ERR,
+				    "Flow[%d][0x%x] Res[%d][0x%016" PRIx64
+				    "] failed rc=%d.\n",
+				    tbl_type, fid, res_parms.resource_func,
+				    res_parms.resource_hndl, trc);
+
+		/* All subsequent calls require the critical_resource to be zero */
+		res_parms.critical_resource = 0;
+
+		rc = ulp_flow_db_resource_del(ulp_ctx,
+					      tbl_type,
+					      fid,
+					      &res_parms);
+	}
+
+	/* Free the Flow ID since we've removed all resources */
+	rc = ulp_flow_db_fid_free(ulp_ctx, tbl_type, fid);
+
+	return rc;
+}
+
+int32_t
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
+{
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid parms, unable to free flow\n");
+		return -EINVAL;
+	}
+
+	return ulp_mapper_resources_free(ulp_ctx,
+					 fid,
+					 BNXT_ULP_REGULAR_FLOW_TABLE);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 2221e12..8655728 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -38,4 +38,17 @@ struct bnxt_ulp_mapper_parms {
 	enum bnxt_ulp_flow_db_tables		tbl_idx;
 };
 
+/* Function that frees all resources associated with the flow. */
+int32_t
+ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
+
+/*
+ * Function that frees all resources and can be called on default or regular
+ * flows
+ */
+int32_t
+ulp_mapper_resources_free(struct bnxt_ulp_context	*ulp_ctx,
+			  uint32_t fid,
+			  enum bnxt_ulp_flow_db_tables	tbl_type);
+
 #endif /* _ULP_MAPPER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 837064e..566668e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -135,7 +135,7 @@ ulp_mark_db_init(struct bnxt_ulp_context *ctxt)
 		    mark_tbl->gfid_max,
 		    mark_tbl->gfid_mask);
 
-	/* Add the mart tbl to the ulp context. */
+	/* Add the mark tbl to the ulp context. */
 	bnxt_ulp_cntxt_ptr2_mark_db_set(ctxt, mark_tbl);
 
 	return 0;
@@ -195,3 +195,24 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 {
 	return ulp_mark_db_mark_set(ctxt, is_gfid, gfid, mark);
 }
+
+/*
+ * Removes a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * gfid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] Unused on delete; the stored mark is reset to ULP_MARK_INVALID
+ *
+ */
+int32_t
+ulp_mark_db_mark_del(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t gfid,
+		     uint32_t mark  __rte_unused)
+{
+	return ulp_mark_db_mark_set(ctxt, is_gfid, gfid, ULP_MARK_INVALID);
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index 18abea4..f0d1515 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -72,4 +72,22 @@ ulp_mark_db_mark_add(struct bnxt_ulp_context *ctxt,
 		     uint32_t gfid,
 		     uint32_t mark);
 
+/*
+ * Removes a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * gfid [in] The flow id that is returned by HW in BD
+ *
+ * mark [in] Unused on delete; the stored mark is reset to ULP_MARK_INVALID
+ *
+ */
+int32_t
+ulp_mark_db_mark_del(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t gfid,
+		     uint32_t mark);
+
 #endif /* _ULP_MARK_MGR_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 22/34] net/bnxt: add support to alloc and program key and act tbls
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (20 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 21/34] net/bnxt: add support to free key and action tables Venkat Duvvuru
@ 2020-04-15  8:18       ` Venkat Duvvuru
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 23/34] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
                         ` (14 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:18 UTC (permalink / raw)
  To: dev; +Cc: Mike Baucom, Venkat Duvvuru

From: Mike Baucom <michael.baucom@broadcom.com>

This patch does the following:
1. Gets the action table information from the action template id
2. Gets the class table information from the class template id
3. Initializes the registry file
4. Allocates a flow id from the flow table
5. Processes the class & action tables (see the sketch below)

Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c         |  37 +++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h         |  13 ++
 drivers/net/bnxt/tf_ulp/ulp_mapper.c          | 196 ++++++++++++++++++++++++--
 drivers/net/bnxt/tf_ulp/ulp_mapper.h          |  15 ++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     |  90 ++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |   7 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  31 +++-
 7 files changed, 378 insertions(+), 11 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index eecee6b..ee703a1 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -303,6 +303,43 @@ int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt)
 }
 
 /*
+ * Allocate the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t ulp_flow_db_fid_alloc(struct bnxt_ulp_context		*ulp_ctxt,
+			      enum bnxt_ulp_flow_db_tables	tbl_idx,
+			      uint32_t				*fid)
+{
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	*fid = 0; /* Initialize fid to invalid value */
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctxt);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+
+	flow_tbl = &flow_db->flow_tbl[tbl_idx];
+	/* check for max flows */
+	if (flow_tbl->num_flows <= flow_tbl->head_index) {
+		BNXT_TF_DBG(ERR, "Flow database has reached max flows\n");
+		return -ENOMEM;
+	}
+	*fid = flow_tbl->flow_tbl_stack[flow_tbl->head_index];
+	flow_tbl->head_index++;
+	ulp_flow_db_active_flow_set(flow_tbl, *fid, 1);
+
+	/* all good, return success */
+	return 0;
+}
+
+/*
  * Allocate the flow database entry.
  * The params->critical_resource has to be set to 0 to allocate a new resource.
  *
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index 20109b9..eb5effa 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -84,6 +84,19 @@ int32_t	ulp_flow_db_init(struct bnxt_ulp_context *ulp_ctxt);
 int32_t	ulp_flow_db_deinit(struct bnxt_ulp_context *ulp_ctxt);
 
 /*
+ * Allocate the flow database entry
+ *
+ * ulp_ctxt [in] Ptr to ulp_context
+ * tbl_idx [in] Specify it is regular or default flow
+ * fid [out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+int32_t ulp_flow_db_fid_alloc(struct bnxt_ulp_context		*ulp_ctxt,
+			      enum bnxt_ulp_flow_db_tables	tbl_idx,
+			      uint32_t				*fid);
+
+/*
  * Allocate the flow database entry.
  * The params->critical_resource has to be set to 0 to allocate a new resource.
  *
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index b3d981e..a5fb1a3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -16,12 +16,6 @@
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
 
-int32_t
-ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms);
-
-int32_t
-ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms);
-
 /*
  * Get the size of the action property for a given index.
  *
@@ -38,7 +32,76 @@ ulp_mapper_act_prop_size_get(uint32_t idx)
 }
 
 /*
- * Get the list of result fields that implement the flow action
+ * Get the list of action tables that implement the flow.
+ * Gets a device dependent list of tables that implement the action template id.
+ *
+ * dev_id [in] The device id of the forwarding element
+ *
+ * tid [in] The action template id that matches the flow
+ *
+ * num_tbls [out] The number of action tables in the returned array
+ *
+ * Returns an array of action tables to implement the flow, or NULL on error.
+ */
+static struct bnxt_ulp_mapper_act_tbl_info *
+ulp_mapper_action_tbl_list_get(uint32_t dev_id,
+			       uint32_t tid,
+			       uint32_t *num_tbls)
+{
+	uint32_t	idx;
+	uint32_t	tidx;
+
+	if (!num_tbls) {
+		BNXT_TF_DBG(ERR, "Invalid arguments\n");
+		return NULL;
+	}
+
+	/* Compose the index from the template id and the device id */
+	tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
+
+	/* NOTE: Need to have something from template compiler to help validate
+	 * range of dev_id and act_tid
+	 */
+	idx		= ulp_act_tmpl_list[tidx].start_tbl_idx;
+	*num_tbls	= ulp_act_tmpl_list[tidx].num_tbls;
+
+	return &ulp_act_tbl_list[idx];
+}
+
+/*
+ * Get the list of classifier tables that implement the flow.
+ * Gets a device dependent list of tables that implement the class template id.
+ *
+ * dev_id [in] The device id of the forwarding element
+ *
+ * tid [in] The template id that matches the flow
+ *
+ * num_tbls [out] The number of classifier tables in the returned array
+ *
+ * Returns an array of classifier tables to implement the flow, or NULL on
+ * error.
+ */
+static struct bnxt_ulp_mapper_class_tbl_info *
+ulp_mapper_class_tbl_list_get(uint32_t dev_id,
+			      uint32_t tid,
+			      uint32_t *num_tbls)
+{
+	uint32_t idx;
+	uint32_t tidx = ULP_DEVICE_PARAMS_INDEX(tid, dev_id);
+
+	if (!num_tbls)
+		return NULL;
+
+	/* NOTE: Need to have something from template compiler to help validate
+	 * range of dev_id and tid
+	 */
+	idx		= ulp_class_tmpl_list[tidx].start_tbl_idx;
+	*num_tbls	= ulp_class_tmpl_list[tidx].num_tbls;
+
+	return &ulp_class_tbl_list[idx];
+}
+
+/*
+ * Get the list of key fields that implement the flow.
  *
  * ctxt [in] The ulp context
  *
@@ -46,7 +109,7 @@ ulp_mapper_act_prop_size_get(uint32_t idx)
  *
  * num_flds [out] The number of key fields in the returned array
  *
- * returns array of Key fields, or NULL on error
+ * Returns array of Key fields, or NULL on error.
  */
 static struct bnxt_ulp_mapper_class_key_field_info *
 ulp_mapper_key_fields_get(struct bnxt_ulp_mapper_class_tbl_info *tbl,
@@ -1158,7 +1221,7 @@ ulp_mapper_index_tbl_process(struct bnxt_ulp_mapper_parms *parms,
  * Function to process the action template. Iterate through the list
  * action info templates and process it.
  */
-int32_t
+static int32_t
 ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 {
 	uint32_t	i;
@@ -1180,7 +1243,7 @@ ulp_mapper_action_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 }
 
 /* Create the classifier table entries for a flow. */
-int32_t
+static int32_t
 ulp_mapper_class_tbls_process(struct bnxt_ulp_mapper_parms *parms)
 {
 	uint32_t	i;
@@ -1335,3 +1398,116 @@ ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid)
 					 fid,
 					 BNXT_ULP_REGULAR_FLOW_TABLE);
 }
+
+/* Function to handle the mapping of the flow so that it is compatible
+ * with the underlying hardware.
+ */
+int32_t
+ulp_mapper_flow_create(struct bnxt_ulp_context *ulp_ctx,
+		       uint32_t app_priority __rte_unused,
+		       struct ulp_rte_hdr_bitmap *hdr_bitmap __rte_unused,
+		       struct ulp_rte_hdr_field *hdr_field,
+		       struct ulp_rte_act_bitmap *act_bitmap,
+		       struct ulp_rte_act_prop *act_prop,
+		       uint32_t class_tid,
+		       uint32_t act_tid,
+		       uint32_t *flow_id)
+{
+	struct ulp_regfile		regfile;
+	struct bnxt_ulp_mapper_parms	parms;
+	struct bnxt_ulp_device_params	*device_params;
+	int32_t				rc, trc;
+
+	/* Initialize the parms structure */
+	memset(&parms, 0, sizeof(parms));
+	parms.act_prop = act_prop;
+	parms.act_bitmap = act_bitmap;
+	parms.regfile = &regfile;
+	parms.hdr_field = hdr_field;
+	parms.tfp = bnxt_ulp_cntxt_tfp_get(ulp_ctx);
+	parms.ulp_ctx = ulp_ctx;
+
+	/* Get the device id from the ulp context */
+	if (bnxt_ulp_cntxt_dev_id_get(ulp_ctx, &parms.dev_id)) {
+		BNXT_TF_DBG(ERR, "Invalid ulp context\n");
+		return -EINVAL;
+	}
+
+	/* Get the action table entry from device id and action template id */
+	parms.act_tid = act_tid;
+	parms.atbls = ulp_mapper_action_tbl_list_get(parms.dev_id,
+						     parms.act_tid,
+						     &parms.num_atbls);
+	if (!parms.atbls || !parms.num_atbls) {
+		BNXT_TF_DBG(ERR, "No action tables for %d:%d\n",
+			    parms.dev_id, parms.act_tid);
+		return -EINVAL;
+	}
+
+	/* Get the class table entry from device id and class template id */
+	parms.class_tid = class_tid;
+	parms.ctbls = ulp_mapper_class_tbl_list_get(parms.dev_id,
+						    parms.class_tid,
+						    &parms.num_ctbls);
+	if (!parms.ctbls || !parms.num_ctbls) {
+		BNXT_TF_DBG(ERR, "No class tables for %d:%d\n",
+			    parms.dev_id, parms.class_tid);
+		return -EINVAL;
+	}
+
+	/* Get the byte order for the further processing from device params */
+	device_params = bnxt_ulp_device_params_get(parms.dev_id);
+	if (!device_params) {
+		BNXT_TF_DBG(ERR, "No device parameters for device id %d\n",
+			    parms.dev_id);
+		return -EINVAL;
+	}
+	parms.order = device_params->byte_order;
+	parms.encap_byte_swap = device_params->encap_byte_swap;
+
+	/* initialize the registry file for further processing */
+	if (!ulp_regfile_init(parms.regfile)) {
+		BNXT_TF_DBG(ERR, "regfile initialization failed.\n");
+		return -EINVAL;
+	}
+
+	/* Allocate a flow id to attach all of the flow's resources to.
+	 * Once allocated, any error path must walk the flow's resource list
+	 * and free each entry.
+	 */
+	rc = ulp_flow_db_fid_alloc(ulp_ctx,
+				   BNXT_ULP_REGULAR_FLOW_TABLE,
+				   &parms.fid);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Unable to allocate flow table entry\n");
+		return rc;
+	}
+
+	/* Process the action template list from the selected action table */
+	rc = ulp_mapper_action_tbls_process(&parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create action tables for %d:%d\n",
+			    parms.dev_id, parms.act_tid);
+		goto flow_error;
+	}
+
+	/* All good. Now process the class template */
+	rc = ulp_mapper_class_tbls_process(&parms);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to create class tables for %d:%d\n",
+			    parms.dev_id, parms.class_tid);
+		goto flow_error;
+	}
+
+	*flow_id = parms.fid;
+
+	return rc;
+
+flow_error:
+	/* Free all resources that were allocated during flow creation */
+	trc = ulp_mapper_flow_destroy(ulp_ctx, parms.fid);
+	if (trc)
+		BNXT_TF_DBG(ERR, "Failed to free all resources rc=%d\n", trc);
+
+	return rc;
+}
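+
+/*
+ * Usage sketch (illustrative only; the variables are assumed to come
+ * from the parser and matcher stages of this series):
+ *
+ *	uint32_t fid = 0;
+ *	int32_t rc;
+ *
+ *	rc = ulp_mapper_flow_create(ulp_ctx, 0, hdr_bitmap, hdr_field,
+ *				    act_bitmap, act_prop, class_tid,
+ *				    act_tid, &fid);
+ *	if (rc)
+ *		return rc;	(all partial resources were already freed)
+ *	(fid now identifies the flow for a later ulp_mapper_flow_destroy)
+ */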
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index 8655728..5f3d46e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -38,6 +38,21 @@ struct bnxt_ulp_mapper_parms {
 	enum bnxt_ulp_flow_db_tables		tbl_idx;
 };
 
+/*
+ * Function to handle mapping the flow to entries that are compatible
+ * with the underlying hardware.
+ */
+int32_t
+ulp_mapper_flow_create(struct bnxt_ulp_context	*ulp_ctx,
+		       uint32_t		app_priority,
+		       struct ulp_rte_hdr_bitmap  *hdr_bitmap,
+		       struct ulp_rte_hdr_field *hdr_field,
+		       struct ulp_rte_act_bitmap *act,
+		       struct ulp_rte_act_prop *act_prop,
+		       uint32_t		class_tid,
+		       uint32_t		act_tid,
+		       uint32_t		*flow_id);
+
 /* Function that frees all resources associated with the flow. */
 int32_t
 ulp_mapper_flow_destroy(struct bnxt_ulp_context	*ulp_ctx, uint32_t fid);
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index aefece8..9d52937 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -110,6 +110,74 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 	}
 };
 
+struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
+	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 3,
+	.start_tbl_idx = 0
+	}
+};
+
+struct bnxt_ulp_mapper_class_tbl_info ulp_class_tbl_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.table_type = TF_TCAM_TBL_TYPE_L2_CTXT_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 0,
+	.blob_key_bit_size = 167,
+	.key_bit_size = 167,
+	.key_num_fields = 13,
+	.result_start_idx = 0,
+	.result_bit_size = 64,
+	.result_num_fields = 13,
+	.ident_start_idx = 0,
+	.ident_nums = 1,
+	.mark_enable = BNXT_ULP_MARK_ENABLE_NO,
+	.critical_resource = 0,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_TCAM_TABLE,
+	.table_type = TF_TCAM_TBL_TYPE_PROF_TCAM,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_LEVEL_0,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 13,
+	.blob_key_bit_size = 81,
+	.key_bit_size = 81,
+	.key_num_fields = 42,
+	.result_start_idx = 13,
+	.result_bit_size = 38,
+	.result_num_fields = 8,
+	.ident_start_idx = 1,
+	.ident_nums = 1,
+	.mark_enable = BNXT_ULP_MARK_ENABLE_NO,
+	.critical_resource = 0,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	},
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_EM_TABLE,
+	.table_type = TF_MEM_EXTERNAL,
+	.direction = TF_DIR_RX,
+	.priority = BNXT_ULP_PRIORITY_NOT_USED,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.key_start_idx = 55,
+	.blob_key_bit_size = 448,
+	.key_bit_size = 197,
+	.key_num_fields = 11,
+	.result_start_idx = 21,
+	.result_bit_size = 64,
+	.result_num_fields = 9,
+	.ident_start_idx = 2,
+	.ident_nums = 0,
+	.mark_enable = BNXT_ULP_MARK_ENABLE_YES,
+	.critical_resource = 1,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_NOT_USED
+	}
+};
+
 struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
 	{
 	.field_bit_size = 12,
@@ -938,6 +1006,28 @@ struct bnxt_ulp_mapper_ident_info ulp_ident_list[] = {
 	}
 };
 
+struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
+	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
+	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
+	.num_tbls = 1,
+	.start_tbl_idx = 0
+	}
+};
+
+struct bnxt_ulp_mapper_act_tbl_info ulp_act_tbl_list[] = {
+	{
+	.resource_func = BNXT_ULP_RESOURCE_FUNC_INDEX_TABLE,
+	.table_type = TF_TBL_TYPE_EXT,
+	.direction = TF_DIR_RX,
+	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
+	.result_start_idx = 0,
+	.result_bit_size = 128,
+	.result_num_fields = 26,
+	.encap_num_fields = 0,
+	.regfile_wr_idx = BNXT_ULP_REGFILE_INDEX_ACTION_PTR_MAIN
+	}
+};
+
 struct bnxt_ulp_mapper_result_field_info ulp_act_result_field_list[] = {
 	{
 	.field_bit_size = 14,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 733836a..957b21a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -12,6 +12,7 @@
 #define ULP_TEMPLATE_DB_H_
 
 #define BNXT_ULP_MAX_NUM_DEVICES 4
+#define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -127,6 +128,12 @@ enum bnxt_ulp_result_opc {
 	BNXT_ULP_RESULT_OPC_LAST = 4
 };
 
+enum bnxt_ulp_search_before_alloc {
+	BNXT_ULP_SEARCH_BEFORE_ALLOC_NO = 0,
+	BNXT_ULP_SEARCH_BEFORE_ALLOC_YES = 1,
+	BNXT_ULP_SEARCH_BEFORE_ALLOC_LAST = 2
+};
+
 enum bnxt_ulp_spec_opc {
 	BNXT_ULP_SPEC_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_SPEC_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index e28d049..b7094c5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,6 +17,10 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
+struct ulp_rte_hdr_bitmap {
+	uint64_t	bits;
+};
+
 /* Structure to store the protocol fields */
 #define RTE_PARSER_FLOW_HDR_FIELD_SIZE		16
 struct ulp_rte_hdr_field {
@@ -51,6 +55,13 @@ struct bnxt_ulp_device_params {
 	uint32_t			num_resources_per_flow;
 };
 
+/* Flow Mapper */
+struct bnxt_ulp_mapper_tbl_list_info {
+	uint32_t	device_name;
+	uint32_t	start_tbl_idx;
+	uint32_t	num_tbls;
+};
+
 struct bnxt_ulp_mapper_class_tbl_info {
 	enum bnxt_ulp_resource_func	resource_func;
 	uint32_t	table_type;
@@ -132,7 +143,25 @@ struct bnxt_ulp_mapper_ident_info {
 extern struct bnxt_ulp_device_params ulp_device_params[];
 
 /*
- * The ulp_data_field_list provides the instructions for creating an action
+ * The ulp_class_tmpl_list and ulp_act_tmpl_list are indexed by the dev_id
+ * and template id (either class or action) returned by the matcher.
+ * The result provides the start index and number of entries in the connected
+ * ulp_class_tbl_list/ulp_act_tbl_list.
+ */
+extern struct bnxt_ulp_mapper_tbl_list_info	ulp_class_tmpl_list[];
+extern struct bnxt_ulp_mapper_tbl_list_info	ulp_act_tmpl_list[];
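+
+/*
+ * Index example: with BNXT_ULP_LOG2_MAX_NUM_DEV == 2, template id tid on
+ * device dev_id resolves to array index ((tid << 2) | dev_id), matching
+ * the designated initializers in ulp_template_db.c.
+ */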
+
+/*
+ * The ulp_class_tbl_list and ulp_act_tbl_list are indexed based on the results
+ * of the template lists.  Each entry describes the high level details of the
+ * table entry to include the start index and number of instructions in the
+ * field lists.
+ */
+extern struct bnxt_ulp_mapper_class_tbl_info	ulp_class_tbl_list[];
+extern struct bnxt_ulp_mapper_act_tbl_info	ulp_act_tbl_list[];
+
+/*
+ * The ulp_class_result_field_list provides the instructions for creating result
  * records such as tcam/em results.
  */
 extern struct bnxt_ulp_mapper_result_field_info	ulp_class_result_field_list[];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 23/34] net/bnxt: match rte flow items with flow template patterns
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (21 preceding siblings ...)
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 22/34] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
@ 2020-04-15  8:19       ` Venkat Duvvuru
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 24/34] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
                         ` (13 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:19 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Takes the hdr_bitmap generated from the rte_flow_items
2. Iterates through the static hdr_bitmap list
3. Returns success if a match is found, otherwise an error, as shown in
   the sketch below
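
As an illustrative sketch (assuming the parser stage has already filled
hdr_field; the wrapper name is hypothetical):

  static void
  example_match_pattern(struct ulp_rte_hdr_field *hdr_field)
  {
          struct ulp_rte_hdr_bitmap hdr = { .bits = BNXT_ULP_HDR_BIT_O_ETH |
                                                    BNXT_ULP_HDR_BIT_O_IPV4 |
                                                    BNXT_ULP_HDR_BIT_O_UDP };
          struct ulp_rte_act_bitmap act = { .bits = 0 };
          uint32_t class_id = 0;

          if (ulp_matcher_pattern_match(ULP_DIR_INGRESS, &hdr, hdr_field,
                                        &act, &class_id) ==
              BNXT_TF_RC_SUCCESS)
                  BNXT_TF_DBG(DEBUG, "selected class template %u\n",
                              class_id);
  }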

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |  12 ++
 drivers/net/bnxt/tf_ulp/ulp_matcher.c         | 152 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_matcher.h         |  26 +++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 115 +++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |  40 +++++++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  21 ++++
 7 files changed, 367 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index f464d9e..455fd5c 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -63,6 +63,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_flow_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mapper.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_matcher.c
 
 #
 # Export include files
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index 3516df4..e4ebfc5 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -25,6 +25,18 @@
 #define	BNXT_ULP_TX_NUM_FLOWS			32
 #define	BNXT_ULP_TX_TBL_IF_ID			0
 
+enum bnxt_tf_rc {
+	BNXT_TF_RC_PARSE_ERR	= -2,
+	BNXT_TF_RC_ERROR	= -1,
+	BNXT_TF_RC_SUCCESS	= 0
+};
+
+/* ULP direction type */
+enum ulp_direction_type {
+	ULP_DIR_INGRESS,
+	ULP_DIR_EGRESS,
+};
+
 struct bnxt_ulp_mark_tbl *
 bnxt_ulp_cntxt_ptr2_mark_db_get(struct bnxt_ulp_context *ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.c b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
new file mode 100644
index 0000000..f367e4c
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "ulp_matcher.h"
+#include "ulp_utils.h"
+
+/* Utility function to check if bitmap is zero */
+static inline
+int ulp_field_mask_is_zero(uint8_t *bitmap, uint32_t size)
+{
+	while (size-- > 0) {
+		if (*bitmap != 0)
+			return 0;
+		bitmap++;
+	}
+	return 1;
+}
+
+/* Utility function to check if bitmap is all ones */
+static inline int
+ulp_field_mask_is_ones(uint8_t *bitmap, uint32_t size)
+{
+	while (size-- > 0) {
+		if (*bitmap != 0xFF)
+			return 0;
+		bitmap++;
+	}
+	return 1;
+}
+
+/* Utility function to check if bitmap is non zero */
+static inline int
+ulp_field_mask_notzero(uint8_t *bitmap, uint32_t size)
+{
+	while (size-- > 0) {
+		if (*bitmap != 0)
+			return 1;
+		bitmap++;
+	}
+	return 0;
+}
+
+/* Utility function to mask the computed and internal proto headers. */
+static void
+ulp_matcher_hdr_fields_normalize(struct ulp_rte_hdr_bitmap *hdr1,
+				 struct ulp_rte_hdr_bitmap *hdr2)
+{
+	/* copy the contents first */
+	rte_memcpy(hdr2, hdr1, sizeof(struct ulp_rte_hdr_bitmap));
+
+	/* reset the computed fields */
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_SVIF);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_OO_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_OI_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_IO_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_II_VLAN);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_O_L3);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_O_L4);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_I_L3);
+	ULP_BITMAP_RESET(hdr2->bits, BNXT_ULP_HDR_BIT_I_L4);
+}
+
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the pattern masks against the flow templates.
+ */
+int32_t
+ulp_matcher_pattern_match(enum ulp_direction_type   dir,
+			  struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			  struct ulp_rte_hdr_field  *hdr_field,
+			  struct ulp_rte_act_bitmap *act_bitmap,
+			  uint32_t		    *class_id)
+{
+	struct bnxt_ulp_header_match_info	*sel_hdr_match;
+	uint32_t				hdr_num, idx, jdx;
+	uint32_t				match = 0;
+	struct ulp_rte_hdr_bitmap		hdr_bitmap_masked;
+	uint32_t				start_idx;
+	struct ulp_rte_hdr_field		*m_field;
+	struct bnxt_ulp_matcher_field_info	*sf;
+
+	/* Select the ingress or egress template to match against */
+	if (dir == ULP_DIR_INGRESS) {
+		sel_hdr_match = ulp_ingress_hdr_match_list;
+		hdr_num = BNXT_ULP_INGRESS_HDR_MATCH_SZ;
+	} else {
+		sel_hdr_match = ulp_egress_hdr_match_list;
+		hdr_num = BNXT_ULP_EGRESS_HDR_MATCH_SZ;
+	}
+
+	/* Remove the hdr bit maps that are internal or computed */
+	ulp_matcher_hdr_fields_normalize(hdr_bitmap, &hdr_bitmap_masked);
+
+	/* Loop through the list of class templates to find the match */
+	for (idx = 0; idx < hdr_num; idx++, sel_hdr_match++) {
+		if (ULP_BITSET_CMP(&sel_hdr_match->hdr_bitmap,
+				   &hdr_bitmap_masked)) {
+			/* no match found */
+			BNXT_TF_DBG(DEBUG, "Pattern Match failed template=%d\n",
+				    idx);
+			continue;
+		}
+		match = ULP_BITMAP_ISSET(act_bitmap->bits,
+					 BNXT_ULP_ACTION_BIT_VNIC);
+		if (match != sel_hdr_match->act_vnic) {
+			/* no match found */
+			BNXT_TF_DBG(DEBUG, "Vnic Match failed template=%d\n",
+				    idx);
+			continue;
+		} else {
+			match = 1;
+		}
+
+		/* Found a matching hdr bitmap, match the fields next */
+		start_idx = sel_hdr_match->start_idx;
+		for (jdx = 0; jdx < sel_hdr_match->num_entries; jdx++) {
+			m_field = &hdr_field[jdx + BNXT_ULP_HDR_FIELD_LAST - 1];
+			sf = &ulp_field_match[start_idx + jdx];
+			switch (sf->mask_opcode) {
+			case BNXT_ULP_FMF_MASK_ANY:
+				match &= ulp_field_mask_is_zero(m_field->mask,
+								m_field->size);
+				break;
+			case BNXT_ULP_FMF_MASK_EXACT:
+				match &= ulp_field_mask_is_ones(m_field->mask,
+								m_field->size);
+				break;
+			case BNXT_ULP_FMF_MASK_WILDCARD:
+				match &= ulp_field_mask_notzero(m_field->mask,
+								m_field->size);
+				break;
+			case BNXT_ULP_FMF_MASK_IGNORE:
+			default:
+				break;
+			}
+			if (!match)
+				break;
+		}
+		if (match) {
+			BNXT_TF_DBG(DEBUG,
+				    "Found matching pattern template %d\n",
+				    sel_hdr_match->class_tmpl_id);
+			*class_id = sel_hdr_match->class_tmpl_id;
+			return BNXT_TF_RC_SUCCESS;
+		}
+	}
+	BNXT_TF_DBG(DEBUG, "Did not find any matching template\n");
+	*class_id = 0;
+	return BNXT_TF_RC_ERROR;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.h b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
new file mode 100644
index 0000000..57a161d
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef ULP_MATCHER_H_
+#define ULP_MATCHER_H_
+
+#include <rte_log.h>
+#include "bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_tf_common.h"
+
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the pattern masks against the flow templates.
+ */
+int32_t
+ulp_matcher_pattern_match(enum ulp_direction_type	    dir,
+			  struct ulp_rte_hdr_bitmap	   *hdr_bitmap,
+			  struct ulp_rte_hdr_field	   *hdr_field,
+			  struct ulp_rte_act_bitmap	   *act_bitmap,
+			  uint32_t			   *class_id);
+
+#endif /* ULP_MATCHER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 9d52937..9fc4b08 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -798,6 +798,121 @@ struct bnxt_ulp_mapper_class_key_field_info ulp_class_key_field_list[] = {
 	}
 };
 
+struct bnxt_ulp_header_match_info ulp_ingress_hdr_match_list[] = {
+	{
+	.hdr_bitmap = { .bits =
+		BNXT_ULP_HDR_BIT_O_ETH |
+		BNXT_ULP_HDR_BIT_O_IPV4 |
+		BNXT_ULP_HDR_BIT_O_UDP },
+	.start_idx = 0,
+	.num_entries = 24,
+	.class_tmpl_id = 0,
+	.act_vnic = 0
+	}
+};
+
+struct bnxt_ulp_header_match_info ulp_egress_hdr_match_list[] = {
+};
+
+struct bnxt_ulp_matcher_field_info ulp_field_match[] = {
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_ANY,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_ANY,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_EXACT,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	},
+	{
+	.mask_opcode = BNXT_ULP_FMF_MASK_IGNORE,
+	.spec_opcode = BNXT_ULP_FMF_SPEC_IGNORE
+	}
+};
+
 struct bnxt_ulp_mapper_result_field_info ulp_class_result_field_list[] = {
 	{
 	.field_bit_size = 10,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 957b21a..319500a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -13,6 +13,8 @@
 
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
+#define BNXT_ULP_INGRESS_HDR_MATCH_SZ 2
+#define BNXT_ULP_EGRESS_HDR_MATCH_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
@@ -45,6 +47,31 @@ enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_LAST             = 0x0000000008000000
 };
 
+enum bnxt_ulp_hdr_bit {
+	BNXT_ULP_HDR_BIT_SVIF                = 0x0000000000000001,
+	BNXT_ULP_HDR_BIT_O_ETH               = 0x0000000000000002,
+	BNXT_ULP_HDR_BIT_OO_VLAN             = 0x0000000000000004,
+	BNXT_ULP_HDR_BIT_OI_VLAN             = 0x0000000000000008,
+	BNXT_ULP_HDR_BIT_O_L3                = 0x0000000000000010,
+	BNXT_ULP_HDR_BIT_O_IPV4              = 0x0000000000000020,
+	BNXT_ULP_HDR_BIT_O_IPV6              = 0x0000000000000040,
+	BNXT_ULP_HDR_BIT_O_L4                = 0x0000000000000080,
+	BNXT_ULP_HDR_BIT_O_TCP               = 0x0000000000000100,
+	BNXT_ULP_HDR_BIT_O_UDP               = 0x0000000000000200,
+	BNXT_ULP_HDR_BIT_T_VXLAN             = 0x0000000000000400,
+	BNXT_ULP_HDR_BIT_T_GRE               = 0x0000000000000800,
+	BNXT_ULP_HDR_BIT_I_ETH               = 0x0000000000001000,
+	BNXT_ULP_HDR_BIT_IO_VLAN             = 0x0000000000002000,
+	BNXT_ULP_HDR_BIT_II_VLAN             = 0x0000000000004000,
+	BNXT_ULP_HDR_BIT_I_L3                = 0x0000000000008000,
+	BNXT_ULP_HDR_BIT_I_IPV4              = 0x0000000000010000,
+	BNXT_ULP_HDR_BIT_I_IPV6              = 0x0000000000020000,
+	BNXT_ULP_HDR_BIT_I_L4                = 0x0000000000040000,
+	BNXT_ULP_HDR_BIT_I_TCP               = 0x0000000000080000,
+	BNXT_ULP_HDR_BIT_I_UDP               = 0x0000000000100000,
+	BNXT_ULP_HDR_BIT_LAST                = 0x0000000000200000
+};
+
 enum bnxt_ulp_byte_order {
 	BNXT_ULP_BYTE_ORDER_BE,
 	BNXT_ULP_BYTE_ORDER_LE,
@@ -67,12 +94,25 @@ enum bnxt_ulp_fmf_mask {
 	BNXT_ULP_FMF_MASK_LAST
 };
 
+enum bnxt_ulp_fmf_spec {
+	BNXT_ULP_FMF_SPEC_IGNORE = 0,
+	BNXT_ULP_FMF_SPEC_LAST = 1
+};
+
 enum bnxt_ulp_mark_enable {
 	BNXT_ULP_MARK_ENABLE_NO = 0,
 	BNXT_ULP_MARK_ENABLE_YES = 1,
 	BNXT_ULP_MARK_ENABLE_LAST = 2
 };
 
+enum bnxt_ulp_hdr_field {
+	BNXT_ULP_HDR_FIELD_MPLS_TAG_NUM = 0,
+	BNXT_ULP_HDR_FIELD_O_VTAG_NUM = 1,
+	BNXT_ULP_HDR_FIELD_I_VTAG_NUM = 2,
+	BNXT_ULP_HDR_FIELD_SVIF_INDEX = 3,
+	BNXT_ULP_HDR_FIELD_LAST = 4
+};
+
 enum bnxt_ulp_mask_opc {
 	BNXT_ULP_MASK_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index b7094c5..dd06fb1 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -29,6 +29,11 @@ struct ulp_rte_hdr_field {
 	uint32_t	size;
 };
 
+struct bnxt_ulp_matcher_field_info {
+	enum bnxt_ulp_fmf_mask	mask_opcode;
+	enum bnxt_ulp_fmf_spec	spec_opcode;
+};
+
 struct ulp_rte_act_bitmap {
 	uint64_t	bits;
 };
@@ -41,6 +46,22 @@ struct ulp_rte_act_prop {
 	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
 };
 
+/* Flow Matcher structures */
+struct bnxt_ulp_header_match_info {
+	struct ulp_rte_hdr_bitmap		hdr_bitmap;
+	uint32_t				start_idx;
+	uint32_t				num_entries;
+	uint32_t				class_tmpl_id;
+	uint32_t				act_vnic;
+};
+
+/* Flow matcher templates structure array defined in template source */
+extern struct bnxt_ulp_header_match_info  ulp_ingress_hdr_match_list[];
+extern struct bnxt_ulp_header_match_info  ulp_egress_hdr_match_list[];
+
+/* Flow field match information structure array defined in template source */
+extern struct bnxt_ulp_matcher_field_info	ulp_field_match[];
+
 /* Device specific parameters */
 struct bnxt_ulp_device_params {
 	uint8_t				description[16];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 24/34] net/bnxt: match rte flow actions with flow template actions
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (22 preceding siblings ...)
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 23/34] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
@ 2020-04-15  8:19       ` Venkat Duvvuru
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 25/34] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
                         ` (12 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:19 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Takes the act_bitmap generated from the rte_flow_actions
2. Iterates through the static act_bitmap list
3. Returns success if a match is found, otherwise an error, as shown in
   the sketch below
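
As an illustrative sketch (the wrapper name is hypothetical; the bitmap
matches the single ingress action template added in this patch):

  static int32_t
  example_match_actions(uint32_t *act_tmpl_id)
  {
          struct ulp_rte_act_bitmap act = { .bits =
                  BNXT_ULP_ACTION_BIT_MARK | BNXT_ULP_ACTION_BIT_RSS };

          /* Returns BNXT_TF_RC_SUCCESS and writes the matching template
           * id (0 here) into *act_tmpl_id.
           */
          return ulp_matcher_action_match(ULP_DIR_INGRESS, &act,
                                          act_tmpl_id);
  }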

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_matcher.c         | 36 +++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_matcher.h         |  9 +++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 12 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |  2 ++
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h | 10 ++++++++
 5 files changed, 69 insertions(+)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.c b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
index f367e4c..ec4121d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_matcher.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
@@ -150,3 +150,39 @@ ulp_matcher_pattern_match(enum ulp_direction_type   dir,
 	*class_id = 0;
 	return BNXT_TF_RC_ERROR;
 }
+
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the action against the flow templates.
+ */
+int32_t
+ulp_matcher_action_match(enum ulp_direction_type		dir,
+			 struct ulp_rte_act_bitmap		*act_bitmap,
+			 uint32_t				*act_id)
+{
+	struct bnxt_ulp_action_match_info	*sel_act_match;
+	uint32_t				act_num, idx;
+
+	/* Select the ingress or egress template to match against */
+	if (dir == ULP_DIR_INGRESS) {
+		sel_act_match = ulp_ingress_act_match_list;
+		act_num = BNXT_ULP_INGRESS_ACT_MATCH_SZ;
+	} else {
+		sel_act_match = ulp_egress_act_match_list;
+		act_num = BNXT_ULP_EGRESS_ACT_MATCH_SZ;
+	}
+
+	/* Loop through the list of action templates to find the match */
+	for (idx = 0; idx < act_num; idx++, sel_act_match++) {
+		if (!ULP_BITSET_CMP(&sel_act_match->act_bitmap,
+				    act_bitmap)) {
+			*act_id = sel_act_match->act_tmpl_id;
+			BNXT_TF_DBG(DEBUG, "Found matching act template %u\n",
+				    *act_id);
+			return BNXT_TF_RC_SUCCESS;
+		}
+	}
+	BNXT_TF_DBG(DEBUG, "Did not find any matching action template\n");
+	*act_id = 0;
+	return BNXT_TF_RC_ERROR;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.h b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
index 57a161d..c818bbe 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_matcher.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.h
@@ -23,4 +23,13 @@ ulp_matcher_pattern_match(enum ulp_direction_type	    dir,
 			  struct ulp_rte_act_bitmap	   *act_bitmap,
 			  uint32_t			   *class_id);
 
+/*
+ * Function to handle the matching of RTE Flows and validating
+ * the action against the flow templates.
+ */
+int32_t
+ulp_matcher_action_match(enum ulp_direction_type	dir,
+			 struct ulp_rte_act_bitmap	*act_bitmap,
+			 uint32_t			*act_id);
+
 #endif /* ULP_MATCHER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 9fc4b08..5a5b1f1 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -1121,6 +1121,18 @@ struct bnxt_ulp_mapper_ident_info ulp_ident_list[] = {
 	}
 };
 
+struct bnxt_ulp_action_match_info ulp_ingress_act_match_list[] = {
+	{
+	.act_bitmap = { .bits =
+		BNXT_ULP_ACTION_BIT_MARK |
+		BNXT_ULP_ACTION_BIT_RSS },
+	.act_tmpl_id = 0
+	}
+};
+
+struct bnxt_ulp_action_match_info ulp_egress_act_match_list[] = {
+};
+
 struct bnxt_ulp_mapper_tbl_list_info ulp_act_tmpl_list[] = {
 	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 319500a..f4850bf 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -15,6 +15,8 @@
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 #define BNXT_ULP_INGRESS_HDR_MATCH_SZ 2
 #define BNXT_ULP_EGRESS_HDR_MATCH_SZ 1
+#define BNXT_ULP_INGRESS_ACT_MATCH_SZ 2
+#define BNXT_ULP_EGRESS_ACT_MATCH_SZ 1
 
 enum bnxt_ulp_action_bit {
 	BNXT_ULP_ACTION_BIT_MARK             = 0x0000000000000001,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index dd06fb1..0e811ec 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -62,6 +62,16 @@ extern struct bnxt_ulp_header_match_info  ulp_egress_hdr_match_list[];
 /* Flow field match Information Structure Array defined in template source*/
 extern struct bnxt_ulp_matcher_field_info	ulp_field_match[];
 
+/* Flow Matcher Action structures */
+struct bnxt_ulp_action_match_info {
+	struct ulp_rte_act_bitmap		act_bitmap;
+	uint32_t				act_tmpl_id;
+};
+
+/* Flow matcher templates structure array defined in template source */
+extern struct bnxt_ulp_action_match_info  ulp_ingress_act_match_list[];
+extern struct bnxt_ulp_action_match_info  ulp_egress_act_match_list[];
+
 /* Device specific parameters */
 struct bnxt_ulp_device_params {
 	uint8_t				description[16];
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 25/34] net/bnxt: add support for rte flow item parsing
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (23 preceding siblings ...)
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 24/34] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
@ 2020-04-15  8:19       ` Venkat Duvvuru
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 26/34] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
                         ` (11 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:19 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

1. Registers a callback handler for each rte_flow_item type, if it
   is supported
2. Iterates through each rte_flow_item until RTE_FLOW_ITEM_TYPE_END
3. Invokes the header callback handler
4. Each header callback handler populates the respective fields
   in hdr_field & hdr_bitmap, as sketched below
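
As an illustrative sketch (the wrapper name and the hdr_field array size
are hypothetical; both outputs are assumed to be zeroed before parsing):

  static int32_t
  example_parse_pattern(const struct rte_flow_item pattern[])
  {
          struct ulp_rte_hdr_bitmap hdr_bitmap;
          struct ulp_rte_hdr_field hdr_field[64];

          memset(&hdr_bitmap, 0, sizeof(hdr_bitmap));
          memset(hdr_field, 0, sizeof(hdr_field));

          /* Walks pattern[] until RTE_FLOW_ITEM_TYPE_END and invokes the
           * per-item handler registered in ulp_hdr_info[].
           */
          return bnxt_ulp_rte_parser_hdr_parse(pattern, &hdr_bitmap,
                                               hdr_field);
  }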

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |   1 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      | 767 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h      | 120 ++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 196 +++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |   7 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  26 +
 6 files changed, 1117 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
 create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 455fd5c..5e2d751 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -64,6 +64,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_template_db.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mapper.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_matcher.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_rte_parser.c
 
 #
 # Export include files
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
new file mode 100644
index 0000000..3ffdcbd
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -0,0 +1,767 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+#include "bnxt_tf_common.h"
+#include "ulp_rte_parser.h"
+#include "ulp_utils.h"
+#include "tfp.h"
+
+/* Inline function to read an integer stored in big endian format */
+static inline void ulp_util_field_int_read(uint8_t *buffer,
+					   uint32_t *val)
+{
+	uint32_t temp_val;
+
+	memcpy(&temp_val, buffer, sizeof(uint32_t));
+	*val = rte_be_to_cpu_32(temp_val);
+}
+
+/* Inline function to write an integer stored in big endian format */
+static inline void ulp_util_field_int_write(uint8_t *buffer,
+					    uint32_t val)
+{
+	uint32_t temp_val = rte_cpu_to_be_32(val);
+
+	memcpy(buffer, &temp_val, sizeof(uint32_t));
+}
+
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow items into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
+			      struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			      struct ulp_rte_hdr_field *hdr_field)
+{
+	const struct rte_flow_item *item = pattern;
+	uint32_t field_idx = BNXT_ULP_HDR_FIELD_LAST;
+	uint32_t vlan_idx = 0;
+	struct bnxt_ulp_rte_hdr_info *hdr_info;
+
+	/* Parse all the items in the pattern */
+	while (item && item->type != RTE_FLOW_ITEM_TYPE_END) {
+		/* get the header information from the flow_hdr_info table */
+		hdr_info = &ulp_hdr_info[item->type];
+		if (hdr_info->hdr_type ==
+		    BNXT_ULP_HDR_TYPE_NOT_SUPPORTED) {
+			BNXT_TF_DBG(ERR,
+				    "Truflow parser does not support type %d\n",
+				    item->type);
+			return BNXT_TF_RC_PARSE_ERR;
+		} else if (hdr_info->hdr_type ==
+			   BNXT_ULP_HDR_TYPE_SUPPORTED) {
+			/* call the registered callback handler */
+			if (hdr_info->proto_hdr_func) {
+				if (hdr_info->proto_hdr_func(item,
+							     hdr_bitmap,
+							     hdr_field,
+							     &field_idx,
+							     &vlan_idx) !=
+				    BNXT_TF_RC_SUCCESS) {
+					return BNXT_TF_RC_ERROR;
+				}
+			}
+		}
+		item++;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Utility function to set the SVIF in the hdr bitmap and hdr fields. */
+static int32_t
+ulp_rte_parser_svif_set(struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			enum rte_flow_item_type proto,
+			uint32_t svif,
+			uint32_t mask)
+{
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_SVIF)) {
+		BNXT_TF_DBG(ERR,
+			    "SVIF already set,"
+			    " multiple sources not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* TBD: Check for any mapping errors for svif */
+	/* Update the hdr_bitmap with BNXT_ULP_HDR_BIT_SVIF. */
+	ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_SVIF);
+
+	if (proto != RTE_FLOW_ITEM_TYPE_PF) {
+		memcpy(hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].spec,
+		       &svif, sizeof(svif));
+		memcpy(hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].mask,
+		       &mask, sizeof(mask));
+		hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].size = sizeof(svif);
+	}
+
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item PF Header. */
+int32_t
+ulp_rte_pf_hdr_handler(const struct rte_flow_item *item,
+		       struct ulp_rte_hdr_bitmap *hdr_bitmap,
+		       struct ulp_rte_hdr_field *hdr_field,
+		       uint32_t *field_idx __rte_unused,
+		       uint32_t *vlan_idx __rte_unused)
+{
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, 0, 0);
+}
+
+/* Function to handle the parsing of RTE Flow item VF Header. */
+int32_t
+ulp_rte_vf_hdr_handler(const struct rte_flow_item *item,
+		       struct ulp_rte_hdr_bitmap *hdr_bitmap,
+		       struct ulp_rte_hdr_field	 *hdr_field,
+		       uint32_t *field_idx __rte_unused,
+		       uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_vf *vf_spec, *vf_mask;
+	uint32_t svif = 0, mask = 0;
+
+	vf_spec = item->spec;
+	vf_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for VF into hdr_field using the
+	 * VF id field.
+	 */
+	if (vf_spec)
+		svif = vf_spec->id;
+	if (vf_mask)
+		mask = vf_mask->id;
+
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, svif, mask);
+}
+
+/* Function to handle the parsing of RTE Flow item port id  Header. */
+int32_t
+ulp_rte_port_id_hdr_handler(const struct rte_flow_item *item,
+			    struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			    struct ulp_rte_hdr_field *hdr_field,
+			    uint32_t *field_idx __rte_unused,
+			    uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_port_id *port_spec, *port_mask;
+	uint32_t svif = 0, mask = 0;
+
+	port_spec = item->spec;
+	port_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for Port into hdr_field using port id
+	 * header fields.
+	 */
+	if (port_spec)
+		svif = port_spec->id;
+	if (port_mask)
+		mask = port_mask->id;
+
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, svif, mask);
+}
+
+/* Function to handle the parsing of RTE Flow item phy port Header. */
+int32_t
+ulp_rte_phy_port_hdr_handler(const struct rte_flow_item *item,
+			     struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			     struct ulp_rte_hdr_field *hdr_field,
+			     uint32_t *field_idx __rte_unused,
+			     uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_phy_port *port_spec, *port_mask;
+	uint32_t svif = 0, mask = 0;
+
+	port_spec = item->spec;
+	port_mask = item->mask;
+
+	/* Copy the rte_flow_item for phy port into hdr_field */
+	if (port_spec)
+		svif = port_spec->index;
+	if (port_mask)
+		mask = port_mask->index;
+
+	return ulp_rte_parser_svif_set(hdr_bitmap, hdr_field,
+				       item->type, svif, mask);
+}
+
+/* Function to handle the parsing of RTE Flow item Ethernet Header. */
+int32_t
+ulp_rte_eth_hdr_handler(const struct rte_flow_item *item,
+			struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			uint32_t *field_idx,
+			uint32_t *vlan_idx)
+{
+	const struct rte_flow_item_eth *eth_spec, *eth_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+	uint64_t set_flag = 0;
+
+	eth_spec = item->spec;
+	eth_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for eth into hdr_field using ethernet
+	 * header fields
+	 */
+	if (eth_spec) {
+		hdr_field[idx].size = sizeof(eth_spec->dst.addr_bytes);
+		memcpy(hdr_field[idx++].spec, eth_spec->dst.addr_bytes,
+		       sizeof(eth_spec->dst.addr_bytes));
+		hdr_field[idx].size = sizeof(eth_spec->src.addr_bytes);
+		memcpy(hdr_field[idx++].spec, eth_spec->src.addr_bytes,
+		       sizeof(eth_spec->src.addr_bytes));
+		hdr_field[idx].size = sizeof(eth_spec->type);
+		memcpy(hdr_field[idx++].spec, &eth_spec->type,
+		       sizeof(eth_spec->type));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_ETH_NUM;
+	}
+
+	if (eth_mask) {
+		memcpy(hdr_field[mdx++].mask, eth_mask->dst.addr_bytes,
+		       sizeof(eth_mask->dst.addr_bytes));
+		memcpy(hdr_field[mdx++].mask, eth_mask->src.addr_bytes,
+		       sizeof(eth_mask->src.addr_bytes));
+		memcpy(hdr_field[mdx++].mask, &eth_mask->type,
+		       sizeof(eth_mask->type));
+	}
+	/* Add number of vlan header elements */
+	*field_idx = idx + BNXT_ULP_PROTO_HDR_VLAN_NUM;
+	*vlan_idx = idx;
+
+	/* Update the hdr_bitmap with BNXT_ULP_HDR_BIT_I_ETH */
+	set_flag = ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH);
+	if (set_flag)
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_ETH);
+	else
+		ULP_BITMAP_RESET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_ETH);
+
+	/* Update the hdr_bitmap with BNXT_ULP_HDR_BIT_O_ETH */
+	ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH);
+
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item Vlan Header. */
+int32_t
+ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
+			 struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			 struct ulp_rte_hdr_field *hdr_field,
+			 uint32_t *field_idx __rte_unused,
+			 uint32_t *vlan_idx)
+{
+	const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
+	uint32_t idx = *vlan_idx;
+	uint32_t mdx = *vlan_idx;
+	uint16_t vlan_tag, priority;
+	uint32_t outer_vtag_num = 0, inner_vtag_num = 0;
+	uint8_t *outer_tag_buffer;
+	uint8_t *inner_tag_buffer;
+
+	vlan_spec = item->spec;
+	vlan_mask = item->mask;
+	outer_tag_buffer = hdr_field[BNXT_ULP_HDR_FIELD_O_VTAG_NUM].spec;
+	inner_tag_buffer = hdr_field[BNXT_ULP_HDR_FIELD_I_VTAG_NUM].spec;
+
+	/*
+	 * Copy the rte_flow_item for vlan into hdr_field using Vlan
+	 * header fields
+	 */
+	if (vlan_spec) {
+		vlan_tag = ntohs(vlan_spec->tci);
+		priority = htons(vlan_tag >> 13);
+		vlan_tag &= 0xfff;
+		vlan_tag = htons(vlan_tag);
+
+		hdr_field[idx].size = sizeof(priority);
+		memcpy(hdr_field[idx++].spec, &priority, sizeof(priority));
+		hdr_field[idx].size = sizeof(vlan_tag);
+		memcpy(hdr_field[idx++].spec, &vlan_tag, sizeof(vlan_tag));
+		hdr_field[idx].size = sizeof(vlan_spec->inner_type);
+		memcpy(hdr_field[idx++].spec, &vlan_spec->inner_type,
+		       sizeof(vlan_spec->inner_type));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_S_VLAN_NUM;
+	}
+
+	if (vlan_mask) {
+		vlan_tag = ntohs(vlan_mask->tci);
+		priority = htons(vlan_tag >> 13);
+		vlan_tag &= 0xfff;
+		vlan_tag = htons(vlan_tag);
+
+		memcpy(hdr_field[mdx++].mask, &priority, sizeof(priority));
+		memcpy(hdr_field[mdx++].mask, &vlan_tag, sizeof(vlan_tag));
+		memcpy(hdr_field[mdx++].mask, &vlan_mask->inner_type,
+		       sizeof(vlan_mask->inner_type));
+	}
+	/* Set the vlan index to new incremented value */
+	*vlan_idx = idx;
+
+	/* Get the outer tag and inner tag counts */
+	ulp_util_field_int_read(outer_tag_buffer, &outer_vtag_num);
+	ulp_util_field_int_read(inner_tag_buffer, &inner_vtag_num);
+
+	/* Update the hdr_bitmap of the vlans */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
+	    !ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_OO_VLAN)) {
+		/* Set the outer vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_OO_VLAN);
+		outer_vtag_num++;
+		ulp_util_field_int_write(outer_tag_buffer, outer_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_O_VTAG_NUM].size =
+							sizeof(uint32_t);
+	} else if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OO_VLAN) &&
+		   !ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				     BNXT_ULP_HDR_BIT_OI_VLAN)) {
+		/* Set the outer vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_OI_VLAN);
+		outer_vtag_num++;
+		ulp_util_field_int_write(outer_tag_buffer, outer_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_O_VTAG_NUM].size =
+							    sizeof(uint32_t);
+	} else if (ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_O_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OO_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OI_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_I_ETH) &&
+		   !ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				     BNXT_ULP_HDR_BIT_IO_VLAN)) {
+		/* Set the inner vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_IO_VLAN);
+		inner_vtag_num++;
+		ulp_util_field_int_write(inner_tag_buffer, inner_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_I_VTAG_NUM].size =
+							    sizeof(uint32_t);
+	} else if (ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_O_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OO_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_OI_VLAN) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_I_ETH) &&
+		   ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				    BNXT_ULP_HDR_BIT_IO_VLAN) &&
+		   !ULP_BITMAP_ISSET(hdr_bitmap->bits,
+				     BNXT_ULP_HDR_BIT_II_VLAN)) {
+		/* Set the inner vlan bit and update the vlan tag num */
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_II_VLAN);
+		inner_vtag_num++;
+		ulp_util_field_int_write(inner_tag_buffer, inner_vtag_num);
+		hdr_field[BNXT_ULP_HDR_FIELD_I_VTAG_NUM].size =
+							    sizeof(uint32_t);
+	} else {
+		BNXT_TF_DBG(ERR, "Error parsing: VLAN hdr found without eth\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
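+
+/*
+ * Example: for a pattern of eth / vlan / vlan, the first VLAN item sets
+ * BNXT_ULP_HDR_BIT_OO_VLAN and the second sets BNXT_ULP_HDR_BIT_OI_VLAN,
+ * leaving an outer vtag count of 2 and an inner vtag count of 0.
+ */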
+
+/* Function to handle the parsing of RTE Flow item IPV4 Header. */
+int32_t
+ulp_rte_ipv4_hdr_handler(const struct rte_flow_item *item,
+			 struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			 struct ulp_rte_hdr_field *hdr_field,
+			 uint32_t *field_idx,
+			 uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	ipv4_spec = item->spec;
+	ipv4_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3)) {
+		BNXT_TF_DBG(ERR, "Parse error: third L3 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for ipv4 into hdr_field using ipv4
+	 * header fields
+	 */
+	if (ipv4_spec) {
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.version_ihl);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.version_ihl,
+		       sizeof(ipv4_spec->hdr.version_ihl));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.type_of_service);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.type_of_service,
+		       sizeof(ipv4_spec->hdr.type_of_service));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.total_length);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.total_length,
+		       sizeof(ipv4_spec->hdr.total_length));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.packet_id);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.packet_id,
+		       sizeof(ipv4_spec->hdr.packet_id));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.fragment_offset);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.fragment_offset,
+		       sizeof(ipv4_spec->hdr.fragment_offset));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.time_to_live);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.time_to_live,
+		       sizeof(ipv4_spec->hdr.time_to_live));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.next_proto_id);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.next_proto_id,
+		       sizeof(ipv4_spec->hdr.next_proto_id));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.hdr_checksum);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.hdr_checksum,
+		       sizeof(ipv4_spec->hdr.hdr_checksum));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.src_addr);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.src_addr,
+		       sizeof(ipv4_spec->hdr.src_addr));
+		hdr_field[idx].size = sizeof(ipv4_spec->hdr.dst_addr);
+		memcpy(hdr_field[idx++].spec, &ipv4_spec->hdr.dst_addr,
+		       sizeof(ipv4_spec->hdr.dst_addr));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_IPV4_NUM;
+	}
+
+	if (ipv4_mask) {
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.version_ihl,
+		       sizeof(ipv4_mask->hdr.version_ihl));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.type_of_service,
+		       sizeof(ipv4_mask->hdr.type_of_service));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.total_length,
+		       sizeof(ipv4_mask->hdr.total_length));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.packet_id,
+		       sizeof(ipv4_mask->hdr.packet_id));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.fragment_offset,
+		       sizeof(ipv4_mask->hdr.fragment_offset));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.time_to_live,
+		       sizeof(ipv4_mask->hdr.time_to_live));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.next_proto_id,
+		       sizeof(ipv4_mask->hdr.next_proto_id));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.hdr_checksum,
+		       sizeof(ipv4_mask->hdr.hdr_checksum));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.src_addr,
+		       sizeof(ipv4_mask->hdr.src_addr));
+		memcpy(hdr_field[mdx++].mask, &ipv4_mask->hdr.dst_addr,
+		       sizeof(ipv4_mask->hdr.dst_addr));
+	}
+	*field_idx = idx; /* Number of ipv4 header elements */
+
+	/* Set the ipv4 header bitmap and computed l3 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV6)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_IPV4);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV4);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item IPV6 Header */
+int32_t
+ulp_rte_ipv6_hdr_handler(const struct rte_flow_item *item,
+			 struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			 struct ulp_rte_hdr_field *hdr_field,
+			 uint32_t *field_idx,
+			 uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	ipv6_spec = item->spec;
+	ipv6_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3)) {
+		BNXT_TF_DBG(ERR, "Parse error: third L3 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for ipv6 into hdr_field using ipv6
+	 * header fields
+	 */
+	if (ipv6_spec) {
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.vtc_flow);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.vtc_flow,
+		       sizeof(ipv6_spec->hdr.vtc_flow));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.payload_len);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.payload_len,
+		       sizeof(ipv6_spec->hdr.payload_len));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.proto);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.proto,
+		       sizeof(ipv6_spec->hdr.proto));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.hop_limits);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.hop_limits,
+		       sizeof(ipv6_spec->hdr.hop_limits));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.src_addr);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.src_addr,
+		       sizeof(ipv6_spec->hdr.src_addr));
+		hdr_field[idx].size = sizeof(ipv6_spec->hdr.dst_addr);
+		memcpy(hdr_field[idx++].spec, &ipv6_spec->hdr.dst_addr,
+		       sizeof(ipv6_spec->hdr.dst_addr));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_IPV6_NUM;
+	}
+
+	if (ipv6_mask) {
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.vtc_flow,
+		       sizeof(ipv6_mask->hdr.vtc_flow));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.payload_len,
+		       sizeof(ipv6_mask->hdr.payload_len));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.proto,
+		       sizeof(ipv6_mask->hdr.proto));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.hop_limits,
+		       sizeof(ipv6_mask->hdr.hop_limits));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.src_addr,
+		       sizeof(ipv6_mask->hdr.src_addr));
+		memcpy(hdr_field[mdx++].mask, &ipv6_mask->hdr.dst_addr,
+		       sizeof(ipv6_mask->hdr.dst_addr));
+	}
+	*field_idx = idx; /* add number of ipv6 header elements */
+
+	/* Set the ipv6 header bitmap and computed l3 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV6)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_IPV6);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L3);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_IPV6);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L3);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item UDP Header. */
+int32_t
+ulp_rte_udp_hdr_handler(const struct rte_flow_item *item,
+			struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			uint32_t *field_idx,
+			uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_udp *udp_spec, *udp_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	udp_spec = item->spec;
+	udp_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4)) {
+		BNXT_TF_DBG(ERR, "Parse error: third L4 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for udp into hdr_field using udp
+	 * header fields
+	 */
+	if (udp_spec) {
+		hdr_field[idx].size = sizeof(udp_spec->hdr.src_port);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.src_port,
+		       sizeof(udp_spec->hdr.src_port));
+		hdr_field[idx].size = sizeof(udp_spec->hdr.dst_port);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.dst_port,
+		       sizeof(udp_spec->hdr.dst_port));
+		hdr_field[idx].size = sizeof(udp_spec->hdr.dgram_len);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.dgram_len,
+		       sizeof(udp_spec->hdr.dgram_len));
+		hdr_field[idx].size = sizeof(udp_spec->hdr.dgram_cksum);
+		memcpy(hdr_field[idx++].spec, &udp_spec->hdr.dgram_cksum,
+		       sizeof(udp_spec->hdr.dgram_cksum));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_UDP_NUM;
+	}
+
+	if (udp_mask) {
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.src_port,
+		       sizeof(udp_mask->hdr.src_port));
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.dst_port,
+		       sizeof(udp_mask->hdr.dst_port));
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.dgram_len,
+		       sizeof(udp_mask->hdr.dgram_len));
+		memcpy(hdr_field[mdx++].mask, &udp_mask->hdr.dgram_cksum,
+		       sizeof(udp_mask->hdr.dgram_cksum));
+	}
+	*field_idx = idx; /* Add number of UDP header elements */
+
+	/* Set the udp header bitmap and computed l4 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_UDP) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_TCP)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_UDP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_UDP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item TCP Header. */
+int32_t
+ulp_rte_tcp_hdr_handler(const struct rte_flow_item *item,
+			struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			struct ulp_rte_hdr_field *hdr_field,
+			uint32_t *field_idx,
+			uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	tcp_spec = item->spec;
+	tcp_mask = item->mask;
+
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4)) {
+		BNXT_TF_DBG(ERR, "Parse error: third L4 header not supported\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/*
+	 * Copy the rte_flow_item for tcp into hdr_field using tcp
+	 * header fields
+	 */
+	if (tcp_spec) {
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.src_port);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.src_port,
+		       sizeof(tcp_spec->hdr.src_port));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.dst_port);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.dst_port,
+		       sizeof(tcp_spec->hdr.dst_port));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.sent_seq);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.sent_seq,
+		       sizeof(tcp_spec->hdr.sent_seq));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.recv_ack);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.recv_ack,
+		       sizeof(tcp_spec->hdr.recv_ack));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.data_off);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.data_off,
+		       sizeof(tcp_spec->hdr.data_off));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.tcp_flags);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.tcp_flags,
+		       sizeof(tcp_spec->hdr.tcp_flags));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.rx_win);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.rx_win,
+		       sizeof(tcp_spec->hdr.rx_win));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.cksum);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.cksum,
+		       sizeof(tcp_spec->hdr.cksum));
+		hdr_field[idx].size = sizeof(tcp_spec->hdr.tcp_urp);
+		memcpy(hdr_field[idx++].spec, &tcp_spec->hdr.tcp_urp,
+		       sizeof(tcp_spec->hdr.tcp_urp));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_TCP_NUM;
+	}
+
+	if (tcp_mask) {
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.src_port,
+		       sizeof(tcp_mask->hdr.src_port));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.dst_port,
+		       sizeof(tcp_mask->hdr.dst_port));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.sent_seq,
+		       sizeof(tcp_mask->hdr.sent_seq));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.recv_ack,
+		       sizeof(tcp_mask->hdr.recv_ack));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.data_off,
+		       sizeof(tcp_mask->hdr.data_off));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.tcp_flags,
+		       sizeof(tcp_mask->hdr.tcp_flags));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.rx_win,
+		       sizeof(tcp_mask->hdr.rx_win));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.cksum,
+		       sizeof(tcp_mask->hdr.cksum));
+		memcpy(hdr_field[mdx++].mask, &tcp_mask->hdr.tcp_urp,
+		       sizeof(tcp_mask->hdr.tcp_urp));
+	}
+	*field_idx = idx; /* Add number of TCP header elements */
+
+	/* Set the tcp header bitmap and computed l4 header bitmaps */
+	if (ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_UDP) ||
+	    ULP_BITMAP_ISSET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_TCP)) {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_TCP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_I_L4);
+	} else {
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_TCP);
+		ULP_BITMAP_SET(hdr_bitmap->bits, BNXT_ULP_HDR_BIT_O_L4);
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item Vxlan Header. */
+int32_t
+ulp_rte_vxlan_hdr_handler(const struct rte_flow_item *item,
+			  struct ulp_rte_hdr_bitmap *hdrbitmap,
+			  struct ulp_rte_hdr_field *hdr_field,
+			  uint32_t *field_idx,
+			  uint32_t *vlan_idx __rte_unused)
+{
+	const struct rte_flow_item_vxlan *vxlan_spec, *vxlan_mask;
+	uint32_t idx = *field_idx;
+	uint32_t mdx = *field_idx;
+
+	vxlan_spec = item->spec;
+	vxlan_mask = item->mask;
+
+	/*
+	 * Copy the rte_flow_item for vxlan into hdr_field using vxlan
+	 * header fields
+	 */
+	if (vxlan_spec) {
+		hdr_field[idx].size = sizeof(vxlan_spec->flags);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->flags,
+		       sizeof(vxlan_spec->flags));
+		hdr_field[idx].size = sizeof(vxlan_spec->rsvd0);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->rsvd0,
+		       sizeof(vxlan_spec->rsvd0));
+		hdr_field[idx].size = sizeof(vxlan_spec->vni);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->vni,
+		       sizeof(vxlan_spec->vni));
+		hdr_field[idx].size = sizeof(vxlan_spec->rsvd1);
+		memcpy(hdr_field[idx++].spec, &vxlan_spec->rsvd1,
+		       sizeof(vxlan_spec->rsvd1));
+	} else {
+		idx += BNXT_ULP_PROTO_HDR_VXLAN_NUM;
+	}
+
+	if (vxlan_mask) {
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->flags,
+		       sizeof(vxlan_mask->flags));
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->rsvd0,
+		       sizeof(vxlan_mask->rsvd0));
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->vni,
+		       sizeof(vxlan_mask->vni));
+		memcpy(hdr_field[mdx++].mask, &vxlan_mask->rsvd1,
+		       sizeof(vxlan_mask->rsvd1));
+	}
+	*field_idx = idx; /* Add number of vxlan header elements */
+
+	/* Update the hdr_bitmap with vxlan */
+	ULP_BITMAP_SET(hdrbitmap->bits, BNXT_ULP_HDR_BIT_T_VXLAN);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow item void Header. */
+int32_t
+ulp_rte_void_hdr_handler(const struct rte_flow_item *item __rte_unused,
+			 struct ulp_rte_hdr_bitmap *hdr_bit __rte_unused,
+			 struct ulp_rte_hdr_field *hdr_field __rte_unused,
+			 uint32_t *field_idx __rte_unused,
+			 uint32_t *vlan_idx __rte_unused)
+{
+	return BNXT_TF_RC_SUCCESS;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
new file mode 100644
index 0000000..3a7845d
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _ULP_RTE_PARSER_H_
+#define _ULP_RTE_PARSER_H_
+
+#include <rte_log.h>
+#include <rte_flow.h>
+#include <rte_flow_driver.h>
+#include "ulp_template_db.h"
+#include "ulp_template_struct.h"
+
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow items into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
+			      struct ulp_rte_hdr_bitmap *hdr_bitmap,
+			      struct ulp_rte_hdr_field  *hdr_field);
+
+/* Function to handle the parsing of RTE Flow item PF Header. */
+int32_t
+ulp_rte_pf_hdr_handler(const struct rte_flow_item	*item,
+		       struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+		       struct ulp_rte_hdr_field		*hdr_field,
+		       uint32_t				*field_idx,
+		       uint32_t				*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item VF Header. */
+int32_t
+ulp_rte_vf_hdr_handler(const struct rte_flow_item	*item,
+		       struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+		       struct ulp_rte_hdr_field		*hdr_field,
+		       uint32_t				*field_idx,
+		       uint32_t				*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item port id Header. */
+int32_t
+ulp_rte_port_id_hdr_handler(const struct rte_flow_item	*item,
+			    struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			    struct ulp_rte_hdr_field	*hdr_field,
+			    uint32_t			*field_idx,
+			    uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item port id Header. */
+int32_t
+ulp_rte_phy_port_hdr_handler(const struct rte_flow_item	*item,
+			     struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			     struct ulp_rte_hdr_field	*hdr_field,
+			     uint32_t			*field_idx,
+			     uint32_t			*vlan_idx);
+
+/* Function to handle the RTE item Ethernet Header. */
+int32_t
+ulp_rte_eth_hdr_handler(const struct rte_flow_item	*item,
+			struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			struct ulp_rte_hdr_field	*hdr_field,
+			uint32_t			*field_idx,
+			uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item Vlan Header. */
+int32_t
+ulp_rte_vlan_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item IPV4 Header. */
+int32_t
+ulp_rte_ipv4_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item IPV6 Header. */
+int32_t
+ulp_rte_ipv6_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item UDP Header. */
+int32_t
+ulp_rte_udp_hdr_handler(const struct rte_flow_item	*item,
+			struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			struct ulp_rte_hdr_field	*hdr_field,
+			uint32_t			*field_idx,
+			uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item TCP Header. */
+int32_t
+ulp_rte_tcp_hdr_handler(const struct rte_flow_item	*item,
+			struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			struct ulp_rte_hdr_field	*hdr_field,
+			uint32_t			*field_idx,
+			uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item Vxlan Header. */
+int32_t
+ulp_rte_vxlan_hdr_handler(const struct rte_flow_item	*item,
+			  struct ulp_rte_hdr_bitmap	*hdrbitmap,
+			  struct ulp_rte_hdr_field	*hdr_field,
+			  uint32_t			*field_idx,
+			  uint32_t			*vlan_idx);
+
+/* Function to handle the parsing of RTE Flow item void Header. */
+int32_t
+ulp_rte_void_hdr_handler(const struct rte_flow_item	*item,
+			 struct ulp_rte_hdr_bitmap	*hdr_bitmap,
+			 struct ulp_rte_hdr_field	*hdr_field,
+			 uint32_t			*field_idx,
+			 uint32_t			*vlan_idx);
+
+#endif /* _ULP_RTE_PARSER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 5a5b1f1..6c214b2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -11,6 +11,7 @@
 #include "ulp_template_db.h"
 #include "ulp_template_field_db.h"
 #include "ulp_template_struct.h"
+#include "ulp_rte_parser.h"
 
 uint32_t ulp_act_prop_map_table[] = {
 	[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ] =
@@ -110,6 +111,201 @@ struct bnxt_ulp_device_params ulp_device_params[] = {
 	}
 };
 
+struct bnxt_ulp_rte_hdr_info ulp_hdr_info[] = {
+	[RTE_FLOW_ITEM_TYPE_END] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_END,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_VOID] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_void_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_INVERT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ANY] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PF] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_pf_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_VF] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_vf_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_PHY_PORT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_phy_port_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_PORT_ID] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_port_id_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_RAW] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ETH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_eth_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_VLAN] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_vlan_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV4] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_ipv4_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV6] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_ipv6_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_UDP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_udp_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_TCP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_tcp_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_SCTP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_VXLAN] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_SUPPORTED,
+		.proto_hdr_func          = ulp_rte_vxlan_hdr_handler
+	},
+	[RTE_FLOW_ITEM_TYPE_E_TAG] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_NVGRE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_MPLS] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GRE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_FUZZY] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTPC] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTPU] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ESP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GENEVE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_VXLAN_GPE] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_IPV6_EXT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_NS] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_NA] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_SLA_ETH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_ICMP6_ND_OPT_TLA_ETH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_MARK] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_META] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GRE_KEY] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_GTP_PSC] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PPPOES] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PPPOED] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_PPPOE_PROTO_ID] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_NSH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_IGMP] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_AH] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	},
+	[RTE_FLOW_ITEM_TYPE_HIGIG2] = {
+		.hdr_type                = BNXT_ULP_HDR_TYPE_NOT_SUPPORTED,
+		.proto_hdr_func          = NULL
+	}
+};
+
 struct bnxt_ulp_mapper_tbl_list_info ulp_class_tmpl_list[] = {
 	[((0 << BNXT_ULP_LOG2_MAX_NUM_DEV) | BNXT_ULP_DEVICE_ID_WH_PLUS)] = {
 	.device_name = BNXT_ULP_DEVICE_ID_WH_PLUS,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index f4850bf..906b542 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -115,6 +115,13 @@ enum bnxt_ulp_hdr_field {
 	BNXT_ULP_HDR_FIELD_LAST = 4
 };
 
+enum bnxt_ulp_hdr_type {
+	BNXT_ULP_HDR_TYPE_NOT_SUPPORTED = 0,
+	BNXT_ULP_HDR_TYPE_SUPPORTED = 1,
+	BNXT_ULP_HDR_TYPE_END = 2,
+	BNXT_ULP_HDR_TYPE_LAST = 3
+};
+
 enum bnxt_ulp_mask_opc {
 	BNXT_ULP_MASK_OPC_SET_TO_CONSTANT = 0,
 	BNXT_ULP_MASK_OPC_SET_TO_HDR_FIELD = 1,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 0e811ec..0699634 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -17,6 +17,18 @@
 #include "rte_flow.h"
 #include "tf_core.h"
 
+/* Number of fields for each protocol */
+#define BNXT_ULP_PROTO_HDR_SVIF_NUM	1
+#define BNXT_ULP_PROTO_HDR_ETH_NUM	3
+#define BNXT_ULP_PROTO_HDR_S_VLAN_NUM	3
+#define BNXT_ULP_PROTO_HDR_VLAN_NUM	6
+#define BNXT_ULP_PROTO_HDR_IPV4_NUM	10
+#define BNXT_ULP_PROTO_HDR_IPV6_NUM	6
+#define BNXT_ULP_PROTO_HDR_UDP_NUM	4
+#define BNXT_ULP_PROTO_HDR_TCP_NUM	9
+#define BNXT_ULP_PROTO_HDR_VXLAN_NUM	4
+#define BNXT_ULP_PROTO_HDR_MAX		128
+
 struct ulp_rte_hdr_bitmap {
 	uint64_t	bits;
 };
@@ -29,6 +41,20 @@ struct ulp_rte_hdr_field {
 	uint32_t	size;
 };
 
+/* Flow Parser Header Information Structure */
+struct bnxt_ulp_rte_hdr_info {
+	enum bnxt_ulp_hdr_type					hdr_type;
+	/* Flow Parser Protocol Header Function Prototype */
+	int (*proto_hdr_func)(const struct rte_flow_item	*item_list,
+			      struct ulp_rte_hdr_bitmap		*hdr_bitmap,
+			      struct ulp_rte_hdr_field		*hdr_field,
+			      uint32_t				*field_idx,
+			      uint32_t				*vlan_idx);
+};
+
+/* Flow Parser Header Information Structure Array defined in template source */
+extern struct bnxt_ulp_rte_hdr_info	ulp_hdr_info[];
+
 struct bnxt_ulp_matcher_field_info {
 	enum bnxt_ulp_fmf_mask	mask_opcode;
 	enum bnxt_ulp_fmf_spec	spec_opcode;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 26/34] net/bnxt: add support for rte flow action parsing
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (24 preceding siblings ...)
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 25/34] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
@ 2020-04-15  8:19       ` Venkat Duvvuru
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 27/34] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
                         ` (10 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:19 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Registers a callback handler for each rte_flow_action type, if
   it is supported
2. Iterates through each rte_flow_action until RTE_FLOW_ACTION_TYPE_END
3. Invokes the registered action callback handler
4. Each action callback handler populates the respective fields in
   act_details & act_bitmap (a minimal sketch of this dispatch scheme
   follows this list)
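
Purely as an illustration of the dispatch scheme above, the standalone
sketch below walks an action array until an END sentinel and invokes a
handler looked up in a table indexed by action type. Every name in it
(act_type, handlers, parse_actions, ...) is an invented stand-in, not a
symbol from this patch:

#include <stdio.h>

enum act_type { ACT_END, ACT_MARK, ACT_DROP, ACT_MAX };

struct act {
	enum act_type type;
	const void *conf;
};

/* Per-action callback; returns 0 on success, -1 on failure */
typedef int (*act_handler)(const struct act *a);

static int handle_mark(const struct act *a)
{
	(void)a;	/* conf is unused in this sketch */
	printf("mark action parsed\n");
	return 0;
}

static int handle_drop(const struct act *a)
{
	(void)a;
	printf("drop action parsed\n");
	return 0;
}

/* Dispatch table indexed by action type; NULL means unsupported */
static const act_handler handlers[ACT_MAX] = {
	[ACT_MARK] = handle_mark,
	[ACT_DROP] = handle_drop,
};

static int parse_actions(const struct act *actions)
{
	for (; actions->type != ACT_END; actions++) {
		act_handler h = handlers[actions->type];

		if (!h)
			return -1;	/* unsupported action */
		if (h(actions))
			return -1;	/* handler rejected the action */
	}
	return 0;
}

int main(void)
{
	const struct act flow[] = {
		{ ACT_MARK, NULL },
		{ ACT_DROP, NULL },
		{ ACT_END, NULL },
	};

	return parse_actions(flow) ? 1 : 0;
}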

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h      |   7 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c      | 441 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h      |  85 ++++-
 drivers/net/bnxt/tf_ulp/ulp_template_db.c     | 199 ++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db.h     |   7 +
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h |  13 +
 6 files changed, 751 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index e4ebfc5..f417579 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -31,6 +31,13 @@ enum bnxt_tf_rc {
 	BNXT_TF_RC_SUCCESS	= 0
 };
 
+/* eth IP Type */
+enum bnxt_ulp_eth_ip_type {
+	BNXT_ULP_ETH_IPV4 = 4,
+	BNXT_ULP_ETH_IPV6 = 5,
+	BNXT_ULP_MAX_ETH_IP_TYPE = 0
+};
+
 /* ulp direction Type */
 enum ulp_direction_type {
 	ULP_DIR_INGRESS,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 3ffdcbd..7a31b43 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -30,6 +30,21 @@ static inline void ulp_util_field_int_write(uint8_t *buffer,
 	memcpy(buffer, &temp_val, sizeof(uint32_t));
 }
 
+/* Utility function to skip the void items. */
+static inline int32_t
+ulp_rte_item_skip_void(const struct rte_flow_item **item, uint32_t increment)
+{
+	if (!*item)
+		return 0;
+	if (increment)
+		(*item)++;
+	while ((*item) && (*item)->type == RTE_FLOW_ITEM_TYPE_VOID)
+		(*item)++;
+	if (*item)
+		return 1;
+	return 0;
+}
+
 /*
  * Function to handle the parsing of RTE Flows and placing
  * the RTE flow items into the ulp structures.
@@ -73,6 +88,45 @@ bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
 	return BNXT_TF_RC_SUCCESS;
 }
 
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow actions into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action actions[],
+			      struct ulp_rte_act_bitmap *act_bitmap,
+			      struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action *action_item = actions;
+	struct bnxt_ulp_rte_act_info *act_info;
+
+	/* Parse all the actions in the action list */
+	while (action_item && action_item->type != RTE_FLOW_ACTION_TYPE_END) {
+		/* get the action information from the ulp_act_info table */
+		act_info = &ulp_act_info[action_item->type];
+		if (act_info->act_type ==
+		    BNXT_ULP_ACT_TYPE_NOT_SUPPORTED) {
+			BNXT_TF_DBG(ERR,
+				    "Truflow parser does not support act %u\n",
+				    action_item->type);
+			return BNXT_TF_RC_ERROR;
+		} else if (act_info->act_type ==
+			   BNXT_ULP_ACT_TYPE_SUPPORTED) {
+			/* call the registered callback handler */
+			if (act_info->proto_act_func) {
+				if (act_info->proto_act_func(action_item,
+							     act_bitmap,
+							     act_prop) !=
+				    BNXT_TF_RC_SUCCESS) {
+					return BNXT_TF_RC_ERROR;
+				}
+			}
+		}
+		action_item++;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
 /* Function to handle the parsing of RTE Flow item PF Header. */
 static int32_t
 ulp_rte_parser_svif_set(struct ulp_rte_hdr_bitmap *hdr_bitmap,
@@ -765,3 +819,390 @@ ulp_rte_void_hdr_handler(const struct rte_flow_item *item __rte_unused,
 {
 	return BNXT_TF_RC_SUCCESS;
 }
+
+/* Function to handle the parsing of RTE Flow action void Header. */
+int32_t
+ulp_rte_void_act_handler(const struct rte_flow_action *action_item __rte_unused,
+			 struct ulp_rte_act_bitmap *act __rte_unused,
+			 struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action Mark Header. */
+int32_t
+ulp_rte_mark_act_handler(const struct rte_flow_action *action_item,
+			 struct ulp_rte_act_bitmap *act,
+			 struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_mark *mark;
+	uint32_t mark_id = 0;
+
+	mark = action_item->conf;
+	if (mark) {
+		mark_id = tfp_cpu_to_be_32(mark->id);
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_MARK],
+		       &mark_id, BNXT_ULP_ACT_PROP_SZ_MARK);
+
+		/* Update the action bitmap with mark */
+		ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_MARK);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: Mark arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
+
+/* Function to handle the parsing of RTE Flow action RSS Header. */
+int32_t
+ulp_rte_rss_act_handler(const struct rte_flow_action *action_item,
+			struct ulp_rte_act_bitmap *act,
+			struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	const struct rte_flow_action_rss *rss;
+
+	rss = action_item->conf;
+	if (rss) {
+		/* Update the action bitmap with rss */
+		ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_RSS);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: RSS arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
+
+/* Function to handle the parsing of RTE Flow action vxlan_encap Header. */
+int32_t
+ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
+				struct ulp_rte_act_bitmap *act,
+				struct ulp_rte_act_prop *ap)
+{
+	const struct rte_flow_action_vxlan_encap *vxlan_encap;
+	const struct rte_flow_item *item;
+	const struct rte_flow_item_eth *eth_spec;
+	const struct rte_flow_item_ipv4 *ipv4_spec;
+	const struct rte_flow_item_ipv6 *ipv6_spec;
+	struct rte_flow_item_vxlan vxlan_spec;
+	uint32_t vlan_num = 0, vlan_size = 0;
+	uint32_t ip_size = 0, ip_type = 0;
+	uint32_t vxlan_size = 0;
+	uint8_t *buff;
+	/* IP header per byte - ver/hlen, TOS, ID, ID, FRAG, FRAG, TTL, PROTO */
+	const uint8_t	def_ipv4_hdr[] = {0x45, 0x00, 0x00, 0x01, 0x00,
+				    0x00, 0x40, 0x11};
+
+	vxlan_encap = action_item->conf;
+	if (!vxlan_encap) {
+		BNXT_TF_DBG(ERR, "Parse Error: Vxlan_encap arg is invalid\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	item = vxlan_encap->definition;
+	if (!item) {
+		BNXT_TF_DBG(ERR, "Parse Error: definition arg is invalid\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (!ulp_rte_item_skip_void(&item, 0))
+		return BNXT_TF_RC_ERROR;
+
+	/* must have ethernet header */
+	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
+		BNXT_TF_DBG(ERR, "Parse Error:vxlan encap does not have eth\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	eth_spec = item->spec;
+	buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC];
+	ulp_encap_buffer_copy(buff,
+			      eth_spec->dst.addr_bytes,
+			      BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC);
+
+	/* Goto the next item */
+	if (!ulp_rte_item_skip_void(&item, 1))
+		return BNXT_TF_RC_ERROR;
+
+	/* May have vlan header */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		vlan_num++;
+		buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG];
+		ulp_encap_buffer_copy(buff,
+				      item->spec,
+				      sizeof(struct rte_flow_item_vlan));
+
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	}
+
+	/* may have two vlan headers */
+	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
+		vlan_num++;
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG +
+		       sizeof(struct rte_flow_item_vlan)],
+		       item->spec,
+		       sizeof(struct rte_flow_item_vlan));
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	}
+	/* Update the vlan count and size if vlan headers were found */
+	if (vlan_num) {
+		vlan_size = vlan_num * sizeof(struct rte_flow_item_vlan);
+		vlan_num = tfp_cpu_to_be_32(vlan_num);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_NUM],
+		       &vlan_num,
+		       sizeof(uint32_t));
+		vlan_size = tfp_cpu_to_be_32(vlan_size);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG_SZ],
+		       &vlan_size,
+		       sizeof(uint32_t));
+	}
+
+	/* L3 must be IPv4 or IPv6 */
+	if (item->type == RTE_FLOW_ITEM_TYPE_IPV4) {
+		ipv4_spec = item->spec;
+		ip_size = BNXT_ULP_ENCAP_IPV4_SIZE;
+
+		/* copy the ipv4 details */
+		if (ulp_buffer_is_empty(&ipv4_spec->hdr.version_ihl,
+					BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS)) {
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP];
+			ulp_encap_buffer_copy(buff,
+					      def_ipv4_hdr,
+					      BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS +
+					      BNXT_ULP_ENCAP_IPV4_ID_PROTO);
+		} else {
+			const uint8_t *tmp_buff;
+
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP];
+			ulp_encap_buffer_copy(buff,
+					      &ipv4_spec->hdr.version_ihl,
+					      BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS);
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP +
+			     BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS];
+			tmp_buff = (const uint8_t *)&ipv4_spec->hdr.packet_id;
+			ulp_encap_buffer_copy(buff,
+					      tmp_buff,
+					      BNXT_ULP_ENCAP_IPV4_ID_PROTO);
+		}
+		buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP +
+		    BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS +
+		    BNXT_ULP_ENCAP_IPV4_ID_PROTO];
+		ulp_encap_buffer_copy(buff,
+				      (const uint8_t *)&ipv4_spec->hdr.dst_addr,
+				      BNXT_ULP_ENCAP_IPV4_DEST_IP);
+
+		/* Update the ip size details */
+		ip_size = tfp_cpu_to_be_32(ip_size);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ],
+		       &ip_size, sizeof(uint32_t));
+
+		/* update the ip type */
+		ip_type = rte_cpu_to_be_32(BNXT_ULP_ETH_IPV4);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
+		       &ip_type, sizeof(uint32_t));
+
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	} else if (item->type == RTE_FLOW_ITEM_TYPE_IPV6) {
+		ipv6_spec = item->spec;
+		ip_size = BNXT_ULP_ENCAP_IPV6_SIZE;
+
+		/* copy the ipv6 details */
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP],
+		       ipv6_spec, BNXT_ULP_ENCAP_IPV6_SIZE);
+
+		/* Update the ip size details */
+		ip_size = tfp_cpu_to_be_32(ip_size);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ],
+		       &ip_size, sizeof(uint32_t));
+
+		/* update the ip type */
+		ip_type = rte_cpu_to_be_32(BNXT_ULP_ETH_IPV6);
+		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L3_TYPE],
+		       &ip_type, sizeof(uint32_t));
+
+		if (!ulp_rte_item_skip_void(&item, 1))
+			return BNXT_TF_RC_ERROR;
+	} else {
+		BNXT_TF_DBG(ERR, "Parse Error: Vxlan Encap expects L3 hdr\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* L4 is UDP */
+	if (item->type != RTE_FLOW_ITEM_TYPE_UDP) {
+		BNXT_TF_DBG(ERR, "vxlan encap does not have udp\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	/* copy the udp details */
+	ulp_encap_buffer_copy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP],
+			      item->spec, BNXT_ULP_ENCAP_UDP_SIZE);
+
+	if (!ulp_rte_item_skip_void(&item, 1))
+		return BNXT_TF_RC_ERROR;
+
+	/* Finally VXLAN */
+	if (item->type != RTE_FLOW_ITEM_TYPE_VXLAN) {
+		BNXT_TF_DBG(ERR, "vxlan encap does not have vni\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	vxlan_size = sizeof(struct rte_flow_item_vxlan);
+	/* copy the vxlan details */
+	memcpy(&vxlan_spec, item->spec, vxlan_size);
+	vxlan_spec.flags = 0x08;
+	ulp_encap_buffer_copy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN],
+			      (const uint8_t *)&vxlan_spec,
+			      vxlan_size);
+	vxlan_size = tfp_cpu_to_be_32(vxlan_size);
+	memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ],
+	       &vxlan_size, sizeof(uint32_t));
+
+	/* Update the action bitmap with vxlan encap */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VXLAN_ENCAP);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action vxlan_decap Header. */
+int32_t
+ulp_rte_vxlan_decap_act_handler(const struct rte_flow_action *action_item
+				__rte_unused,
+				struct ulp_rte_act_bitmap *act,
+				struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	/* Update the action bitmap with vxlan decap */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VXLAN_DECAP);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action drop Header. */
+int32_t
+ulp_rte_drop_act_handler(const struct rte_flow_action *action_item __rte_unused,
+			 struct ulp_rte_act_bitmap *act,
+			 struct ulp_rte_act_prop *act_prop __rte_unused)
+{
+	/* Update the action bitmap with drop */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_DROP);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action count. */
+int32_t
+ulp_rte_count_act_handler(const struct rte_flow_action *action_item,
+			  struct ulp_rte_act_bitmap *act,
+			  struct ulp_rte_act_prop *act_prop __rte_unused)
+
+{
+	const struct rte_flow_action_count *act_count;
+
+	act_count = action_item->conf;
+	if (act_count) {
+		if (act_count->shared) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Error:Shared count not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_COUNT],
+		       &act_count->id,
+		       BNXT_ULP_ACT_PROP_SZ_COUNT);
+	}
+
+	/* Update the action bitmap with count */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_COUNT);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action PF. */
+int32_t
+ulp_rte_pf_act_handler(const struct rte_flow_action *action_item __rte_unused,
+		       struct ulp_rte_act_bitmap *act,
+		       struct ulp_rte_act_prop *act_prop)
+{
+	uint8_t *svif_buf;
+	uint8_t *vnic_buffer;
+	uint32_t svif;
+
+	/* Update the action bitmap with vnic bit */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VNIC);
+
+	/* copy the PF of the current device into VNIC Property */
+	svif_buf = &act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC];
+	ulp_util_field_int_read(svif_buf, &svif);
+	vnic_buffer = &act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC];
+	ulp_util_field_int_write(vnic_buffer, svif);
+
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action VF. */
+int32_t
+ulp_rte_vf_act_handler(const struct rte_flow_action *action_item,
+		       struct ulp_rte_act_bitmap *act,
+		       struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_vf *vf_action;
+
+	vf_action = action_item->conf;
+	if (vf_action) {
+		if (vf_action->original) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Error:VF Original not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		/* TBD: Update the computed VNIC using VF conversion */
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		       &vf_action->id,
+		       BNXT_ULP_ACT_PROP_SZ_VNIC);
+	}
+
+	/* Update the action bitmap with vnic */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VNIC);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action port_id. */
+int32_t
+ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
+			    struct ulp_rte_act_bitmap *act,
+			    struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_port_id *port_id;
+
+	port_id = act_item->conf;
+	if (port_id) {
+		if (port_id->original) {
+			BNXT_TF_DBG(ERR,
+				    "ParseErr:Portid Original not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		/* TBD: Update the computed VNIC using port conversion */
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		       &port_id->id,
+		       BNXT_ULP_ACT_PROP_SZ_VNIC);
+	}
+
+	/* Update the action bitmap with vnic */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VNIC);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action phy_port. */
+int32_t
+ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
+			     struct ulp_rte_act_bitmap *act,
+			     struct ulp_rte_act_prop *act_prop)
+{
+	const struct rte_flow_action_phy_port *phy_port;
+
+	phy_port = action_item->conf;
+	if (phy_port) {
+		if (phy_port->original) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Err:Port Original not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		memcpy(&act_prop->act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
+		       &phy_port->index,
+		       BNXT_ULP_ACT_PROP_SZ_VPORT);
+	}
+
+	/* Update the action bitmap with vport */
+	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VPORT);
+	return BNXT_TF_RC_SUCCESS;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index 3a7845d..0ab43d2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -12,6 +12,14 @@
 #include "ulp_template_db.h"
 #include "ulp_template_struct.h"
 
+/* defines to be used in the tunnel header parsing */
+#define BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS	2
+#define BNXT_ULP_ENCAP_IPV4_ID_PROTO		6
+#define BNXT_ULP_ENCAP_IPV4_DEST_IP		4
+#define BNXT_ULP_ENCAP_IPV4_SIZE		12
+#define BNXT_ULP_ENCAP_IPV6_SIZE		8
+#define BNXT_ULP_ENCAP_UDP_SIZE			4
+
 /*
  * Function to handle the parsing of RTE Flows and placing
  * the RTE flow items into the ulp structures.
@@ -21,6 +29,15 @@ bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
 			      struct ulp_rte_hdr_bitmap *hdr_bitmap,
 			      struct ulp_rte_hdr_field  *hdr_field);
 
+/*
+ * Function to handle the parsing of RTE Flows and placing
+ * the RTE flow actions into the ulp structures.
+ */
+int32_t
+bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action	actions[],
+			      struct ulp_rte_act_bitmap		*act_bitmap,
+			      struct ulp_rte_act_prop		*act_prop);
+
 /* Function to handle the parsing of RTE Flow item PF Header. */
 int32_t
 ulp_rte_pf_hdr_handler(const struct rte_flow_item	*item,
@@ -45,7 +62,7 @@ ulp_rte_port_id_hdr_handler(const struct rte_flow_item	*item,
 			    uint32_t			*field_idx,
 			    uint32_t			*vlan_idx);
 
-/* Function to handle the parsing of RTE Flow item port id Header. */
+/* Function to handle the parsing of RTE Flow item port Header. */
 int32_t
 ulp_rte_phy_port_hdr_handler(const struct rte_flow_item	*item,
 			     struct ulp_rte_hdr_bitmap	*hdr_bitmap,
@@ -117,4 +134,70 @@ ulp_rte_void_hdr_handler(const struct rte_flow_item	*item,
 			 uint32_t			*field_idx,
 			 uint32_t			*vlan_idx);
 
+/* Function to handle the parsing of RTE Flow action void Header. */
+int32_t
+ulp_rte_void_act_handler(const struct rte_flow_action	*action_item,
+			 struct ulp_rte_act_bitmap	*act,
+			 struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action RSS Header. */
+int32_t
+ulp_rte_rss_act_handler(const struct rte_flow_action	*action_item,
+			struct ulp_rte_act_bitmap	*act,
+			struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action Mark Header. */
+int32_t
+ulp_rte_mark_act_handler(const struct rte_flow_action	*action_item,
+			 struct ulp_rte_act_bitmap	*act,
+			 struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action vxlan_encap Header. */
+int32_t
+ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action	*action_item,
+				struct ulp_rte_act_bitmap	*act,
+				struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action vxlan_decap Header. */
+int32_t
+ulp_rte_vxlan_decap_act_handler(const struct rte_flow_action	*action_item,
+				struct ulp_rte_act_bitmap	*act,
+				struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action drop Header. */
+int32_t
+ulp_rte_drop_act_handler(const struct rte_flow_action	*action_item,
+			 struct ulp_rte_act_bitmap	*act,
+			 struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action count. */
+int32_t
+ulp_rte_count_act_handler(const struct rte_flow_action	*action_item,
+			  struct ulp_rte_act_bitmap	*act,
+			  struct ulp_rte_act_prop	*act_prop);
+
+/* Function to handle the parsing of RTE Flow action PF. */
+int32_t
+ulp_rte_pf_act_handler(const struct rte_flow_action	*action_item,
+		       struct ulp_rte_act_bitmap	*act,
+		       struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action VF. */
+int32_t
+ulp_rte_vf_act_handler(const struct rte_flow_action	*action_item,
+		       struct ulp_rte_act_bitmap	*act,
+		       struct ulp_rte_act_prop		*act_prop);
+
+/* Function to handle the parsing of RTE Flow action port_id. */
+int32_t
+ulp_rte_port_id_act_handler(const struct rte_flow_action	*act_item,
+			    struct ulp_rte_act_bitmap		*act,
+			    struct ulp_rte_act_prop		*act_p);
+
+/* Function to handle the parsing of RTE Flow action phy_port. */
+int32_t
+ulp_rte_phy_port_act_handler(const struct rte_flow_action	*action_item,
+			     struct ulp_rte_act_bitmap		*act,
+			     struct ulp_rte_act_prop		*act_prop);
+
 #endif /* _ULP_RTE_PARSER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.c b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
index 6c214b2..411f1e3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.c
@@ -96,6 +96,205 @@ uint32_t ulp_act_prop_map_table[] = {
 		BNXT_ULP_ACT_PROP_SZ_LAST
 };
 
+struct bnxt_ulp_rte_act_info ulp_act_info[] = {
+	[RTE_FLOW_ACTION_TYPE_END] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_END,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_VOID] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_void_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PASSTHRU] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_JUMP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_MARK] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_mark_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_FLAG] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_QUEUE] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DROP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_drop_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_COUNT] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_count_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_RSS] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_rss_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PF] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_pf_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_VF] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_vf_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PHY_PORT] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_phy_port_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_PORT_ID] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_port_id_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_METER] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SECURITY] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_MPLS_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_DEC_MPLS_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_NW_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_DEC_NW_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_COPY_TTL_OUT] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_COPY_TTL_IN] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_POP_MPLS] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_OF_PUSH_MPLS] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_vxlan_encap_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_VXLAN_DECAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_vxlan_decap_act_handler
+	},
+	[RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_NVGRE_DECAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_RAW_ENCAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_RAW_DECAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV4_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_IPV6_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_TP_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_TP_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_MAC_SWAP] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DEC_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_TTL] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_MAC_SRC] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_SET_MAC_DST] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_INC_TCP_ACK] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	},
+	[RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK] = {
+		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
+		.proto_act_func          = NULL
+	}
+};
+
 struct bnxt_ulp_device_params ulp_device_params[] = {
 	[BNXT_ULP_DEVICE_ID_WH_PLUS] = {
 		.global_fid_enable       = BNXT_ULP_SYM_YES,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db.h b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
index 906b542..dfab266 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db.h
@@ -74,6 +74,13 @@ enum bnxt_ulp_hdr_bit {
 	BNXT_ULP_HDR_BIT_LAST                = 0x0000000000200000
 };
 
+enum bnxt_ulp_act_type {
+	BNXT_ULP_ACT_TYPE_NOT_SUPPORTED = 0,
+	BNXT_ULP_ACT_TYPE_SUPPORTED = 1,
+	BNXT_ULP_ACT_TYPE_END = 2,
+	BNXT_ULP_ACT_TYPE_LAST = 3
+};
+
 enum bnxt_ulp_byte_order {
 	BNXT_ULP_BYTE_ORDER_BE,
 	BNXT_ULP_BYTE_ORDER_LE,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index 0699634..47c0dd8 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -72,6 +72,19 @@ struct ulp_rte_act_prop {
 	uint8_t	act_details[BNXT_ULP_ACT_PROP_IDX_LAST];
 };
 
+/* Flow Parser Action Information Structure */
+struct bnxt_ulp_rte_act_info {
+	enum bnxt_ulp_act_type					act_type;
+	/* Flow Parser Protocol Action Function Prototype */
+	int32_t (*proto_act_func)
+		(const struct rte_flow_action			*action_item,
+		struct ulp_rte_act_bitmap			*act_bitmap,
+		struct ulp_rte_act_prop				*act_prop);
+};
+
+/* Flow Parser Action Information Structure Array defined in template source */
+extern struct bnxt_ulp_rte_act_info	ulp_act_info[];
+
 /* Flow Matcher structures */
 struct bnxt_ulp_header_match_info {
 	struct ulp_rte_hdr_bitmap		hdr_bitmap;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 27/34] net/bnxt: add support for rte flow create driver hook
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (25 preceding siblings ...)
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 26/34] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
@ 2020-04-15  8:19       ` Venkat Duvvuru
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 28/34] net/bnxt: add support for rte flow validate " Venkat Duvvuru
                         ` (9 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:19 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following:
1. Validates the rte_flow_create arguments
2. Parses the rte_flow_item types
3. Parses the rte_flow_action types
4. Calls ulp_matcher_pattern_match to see if the flow is supported
5. If there is a match, calls ulp_mapper_flow_create to program the
   key & action tables (a brief usage sketch follows this list)
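
As a usage illustration only (the helper name and the chosen pattern are
invented for this sketch), an application reaches this driver hook through
the public rte_flow API; the PMD then runs pattern[] and actions[] through
the ulp parsers, matches a class/action template and programs the key and
action tables:

#include <rte_flow.h>

/* Create an ingress rule dropping all IPv4 traffic on an already
 * configured and started port.
 */
static struct rte_flow *
drop_ipv4_flow(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* Returns a flow handle on success, NULL with error set on failure */
	return rte_flow_create(port_id, &attr, pattern, actions, error);
}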

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile               |   1 +
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 177 ++++++++++++++++++++++++++++++++
 2 files changed, 178 insertions(+)
 create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index 5e2d751..5ed33cc 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -65,6 +65,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_utils.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_mapper.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_matcher.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/ulp_rte_parser.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_ulp/bnxt_ulp_flow.c
 
 #
 # Export include files
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
new file mode 100644
index 0000000..6402dd3
--- /dev/null
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -0,0 +1,177 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2014-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include "bnxt.h"
+#include "bnxt_tf_common.h"
+#include "ulp_rte_parser.h"
+#include "ulp_matcher.h"
+#include "ulp_flow_db.h"
+#include "ulp_mapper.h"
+#include <rte_malloc.h>
+
+static int32_t
+bnxt_ulp_flow_validate_args(const struct rte_flow_attr *attr,
+			    const struct rte_flow_item pattern[],
+			    const struct rte_flow_action actions[],
+			    struct rte_flow_error *error)
+{
+	/* Perform the validation of the arguments for null */
+	if (!error)
+		return BNXT_TF_RC_ERROR;
+
+	if (!pattern) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ITEM_NUM,
+				   NULL,
+				   "NULL pattern.");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (!actions) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+				   NULL,
+				   "NULL action.");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (!attr) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   NULL,
+				   "NULL attribute.");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (attr->egress && attr->ingress) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ATTR,
+				   attr,
+				   "EGRESS AND INGRESS UNSUPPORTED");
+		return BNXT_TF_RC_ERROR;
+	}
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to create the rte flow. */
+static struct rte_flow *
+bnxt_ulp_flow_create(struct rte_eth_dev			*dev,
+		     const struct rte_flow_attr		*attr,
+		     const struct rte_flow_item		pattern[],
+		     const struct rte_flow_action	actions[],
+		     struct rte_flow_error		*error)
+{
+	struct ulp_rte_hdr_bitmap hdr_bitmap;
+	struct ulp_rte_hdr_field hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	struct ulp_rte_act_bitmap act_bitmap;
+	struct ulp_rte_act_prop act_prop;
+	enum ulp_direction_type dir = ULP_DIR_INGRESS;
+	uint32_t class_id, act_tmpl;
+	uint32_t app_priority;
+	int ret;
+	struct bnxt_ulp_context *ulp_ctx = NULL;
+	uint32_t vnic;
+	uint8_t svif;
+	struct rte_flow *flow_id;
+	uint32_t fid;
+
+	if (bnxt_ulp_flow_validate_args(attr,
+					pattern, actions,
+					error) == BNXT_TF_RC_ERROR) {
+		BNXT_TF_DBG(ERR, "Invalid arguments being passed\n");
+		return NULL;
+	}
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		return NULL;
+	}
+
+	/* clear the header bitmap and field structure */
+	memset(&hdr_bitmap, 0, sizeof(struct ulp_rte_hdr_bitmap));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(&act_bitmap, 0, sizeof(act_bitmap));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	svif = bnxt_get_svif(dev->data->port_id, false);
+	BNXT_TF_DBG(ERR, "SVIF for port[%d] = 0x%08x\n",
+		    dev->data->port_id, svif);
+
+	hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].size = sizeof(svif);
+	hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].spec[0] = svif;
+	hdr_field[BNXT_ULP_HDR_FIELD_SVIF_INDEX].mask[0] = -1;
+	ULP_BITMAP_SET(hdr_bitmap.bits, BNXT_ULP_HDR_BIT_SVIF);
+
+	/*
+	 * The VNIC is pushed as a 32-bit value; the pop side takes care
+	 * of the proper size
+	 */
+	vnic = (uint32_t)bnxt_get_vnic_id(dev->data->port_id);
+	vnic = htonl(vnic);
+	rte_memcpy(&act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		   &vnic, BNXT_ULP_ACT_PROP_SZ_VNIC);
+
+	/* Parse the rte flow pattern */
+	ret = bnxt_ulp_rte_parser_hdr_parse(pattern,
+					    &hdr_bitmap,
+					    hdr_field);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	/* Parse the rte flow action */
+	ret = bnxt_ulp_rte_parser_act_parse(actions,
+					    &act_bitmap,
+					    &act_prop);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	if (attr->egress)
+		dir = ULP_DIR_EGRESS;
+
+	ret = ulp_matcher_pattern_match(dir, &hdr_bitmap, hdr_field,
+					&act_bitmap, &class_id);
+
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	ret = ulp_matcher_action_match(dir, &act_bitmap, &act_tmpl);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	app_priority = attr->priority;
+	/* call the ulp mapper to create the flow in the hardware */
+	ret = ulp_mapper_flow_create(ulp_ctx,
+				     app_priority,
+				     &hdr_bitmap,
+				     hdr_field,
+				     &act_bitmap,
+				     &act_prop,
+				     class_id,
+				     act_tmpl,
+				     &fid);
+	if (!ret) {
+		flow_id = (struct rte_flow *)((uintptr_t)fid);
+		return flow_id;
+	}
+
+parse_error:
+	rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+			   "Failed to create flow.");
+	return NULL;
+}
+
+const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
+	.validate = NULL,
+	.create = bnxt_ulp_flow_create,
+	.destroy = NULL,
+	.flush = NULL,
+	.query = NULL,
+	.isolate = NULL
+};
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 28/34] net/bnxt: add support for rte flow validate driver hook
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (26 preceding siblings ...)
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 27/34] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
@ 2020-04-15  8:19       ` Venkat Duvvuru
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 29/34] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
                         ` (8 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:19 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following (a usage sketch follows the list):
1. Validates the rte_flow arguments
2. Parses the rte_flow_item types
3. Parses the rte_flow_action types
4. Calls ulp_matcher_pattern_match to see if the flow is supported
5. If there is a match, returns success; otherwise, returns failure
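
A minimal, hypothetical usage sketch of this hook through the public
rte_flow API follows; the port id, the match items and the mark id are
illustrative assumptions, not part of this patch:

    #include <rte_flow.h>

    static struct rte_flow *
    validate_then_create(uint16_t port_id)
    {
        /* A simple ingress ETH + IPV4 flow with a MARK action; both
         * calls below are dispatched to the bnxt ULP driver hooks.
         */
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_mark mark = { .id = 0x1234 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error error;

        if (rte_flow_validate(port_id, &attr, pattern, actions, &error))
            return NULL;
        return rte_flow_create(port_id, &attr, pattern, actions, &error);
    }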

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 67 ++++++++++++++++++++++++++++++++-
 1 file changed, 66 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 6402dd3..490b2ba 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -167,8 +167,73 @@ bnxt_ulp_flow_create(struct rte_eth_dev			*dev,
 	return NULL;
 }
 
+/* Function to validate the rte flow. */
+static int
+bnxt_ulp_flow_validate(struct rte_eth_dev *dev __rte_unused,
+		       const struct rte_flow_attr *attr,
+		       const struct rte_flow_item pattern[],
+		       const struct rte_flow_action actions[],
+		       struct rte_flow_error *error)
+{
+	struct ulp_rte_hdr_bitmap hdr_bitmap;
+	struct ulp_rte_hdr_field hdr_field[BNXT_ULP_PROTO_HDR_MAX];
+	struct ulp_rte_act_bitmap act_bitmap;
+	struct ulp_rte_act_prop act_prop;
+	enum ulp_direction_type dir = ULP_DIR_INGRESS;
+	uint32_t class_id, act_tmpl;
+	int ret;
+
+	if (bnxt_ulp_flow_validate_args(attr,
+					pattern, actions,
+					error) == BNXT_TF_RC_ERROR) {
+		BNXT_TF_DBG(ERR, "Invalid arguments being passed\n");
+		return -EINVAL;
+	}
+
+	/* clear the header bitmap and field structure */
+	memset(&hdr_bitmap, 0, sizeof(struct ulp_rte_hdr_bitmap));
+	memset(hdr_field, 0, sizeof(hdr_field));
+	memset(&act_bitmap, 0, sizeof(act_bitmap));
+	memset(&act_prop, 0, sizeof(act_prop));
+
+	/* Parse the rte flow pattern */
+	ret = bnxt_ulp_rte_parser_hdr_parse(pattern,
+					    &hdr_bitmap,
+					    hdr_field);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	/* Parse the rte flow action */
+	ret = bnxt_ulp_rte_parser_act_parse(actions,
+					    &act_bitmap,
+					    &act_prop);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	if (attr->egress)
+		dir = ULP_DIR_EGRESS;
+
+	ret = ulp_matcher_pattern_match(dir, &hdr_bitmap, hdr_field,
+					&act_bitmap, &class_id);
+
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	ret = ulp_matcher_action_match(dir, &act_bitmap, &act_tmpl);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
+	/* all good, return success */
+	return ret;
+
+parse_error:
+	rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+			   "Failed to validate flow.");
+	return -EINVAL;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
-	.validate = NULL,
+	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = NULL,
 	.flush = NULL,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 29/34] net/bnxt: add support for rte flow destroy driver hook
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (27 preceding siblings ...)
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 28/34] net/bnxt: add support for rte flow validate " Venkat Duvvuru
@ 2020-04-15  8:19       ` Venkat Duvvuru
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 30/34] net/bnxt: add support for rte flow flush " Venkat Duvvuru
                         ` (7 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:19 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following (a sketch of the handle encoding follows
the list):
1. Gets the ULP session information from eth_dev
2. Fetches the flow associated with the flow id from the flow table
3. Calls ulp_mapper_resources_free, which releases the key and action
   tables associated with that flow
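
Note that the driver never hands the application a real pointer: the
32-bit flow id is packed into the opaque rte_flow handle on create and
unpacked again here. A small stand-alone sketch of that round trip
(the helper names are illustrative, not from this patch):

    #include <stdint.h>

    struct rte_flow;    /* opaque to the application */

    /* Pack the 32-bit flow id into the handle, as flow create does. */
    static struct rte_flow *fid_to_handle(uint32_t fid)
    {
        return (struct rte_flow *)(uintptr_t)fid;
    }

    /* Unpack it again, as bnxt_ulp_flow_destroy does. */
    static uint32_t handle_to_fid(struct rte_flow *flow)
    {
        return (uint32_t)(uintptr_t)flow;
    }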

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 490b2ba..35099a3 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -232,10 +232,40 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev __rte_unused,
 	return -EINVAL;
 }
 
+/* Function to destroy the rte flow. */
+static int
+bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
+		      struct rte_flow *flow,
+		      struct rte_flow_error *error)
+{
+	int ret = 0;
+	struct bnxt_ulp_context *ulp_ctx;
+	uint32_t fid;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to destroy flow.");
+		return -EINVAL;
+	}
+
+	fid = (uint32_t)(uintptr_t)flow;
+
+	ret = ulp_mapper_flow_destroy(ulp_ctx, fid);
+	if (ret)
+		rte_flow_error_set(error, -ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to destroy flow.");
+
+	return ret;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
-	.destroy = NULL,
+	.destroy = bnxt_ulp_flow_destroy,
 	.flush = NULL,
 	.query = NULL,
 	.isolate = NULL
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 30/34] net/bnxt: add support for rte flow flush driver hook
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (28 preceding siblings ...)
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 29/34] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
@ 2020-04-15  8:19       ` Venkat Duvvuru
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 31/34] net/bnxt: register tf rte flow ops Venkat Duvvuru
                         ` (6 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:19 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Venkat Duvvuru

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

This patch does the following (a sketch of the bitmap iteration follows
the list):
1. Gets the ULP session information from eth_dev
2. Fetches the rte_flow table associated with this session
3. Iterates through all the flows in the flow table
4. Calls ulp_mapper_resources_free, which releases the key and action
   tables associated with each flow
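
The iteration works because the flow table tracks active flows in an
MSB-first bitmap: the iterator skips empty 64-bit words and uses
count-leading-zeros to find the next active flow id. A stand-alone
sketch of that convention (a portable variant using 1ULL and
__builtin_clzll; the in-tree iterator additionally guards against a
corrupt table):

    #include <stdint.h>

    #define IDX_BITMAP_SZ (sizeof(uint64_t) * 8)

    /* The bit for flow id j sits at the MSB end of its 64-bit word. */
    static void set_active(uint64_t *tbl, uint32_t j)
    {
        tbl[j / IDX_BITMAP_SZ] |=
            1ULL << ((IDX_BITMAP_SZ - 1) - (j % IDX_BITMAP_SZ));
    }

    /* First active flow id within a non-zero word. */
    static uint32_t first_active(uint64_t bs, uint32_t word_idx)
    {
        return word_idx * IDX_BITMAP_SZ + __builtin_clzll(bs);
    }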

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c      |  3 ++
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c | 33 +++++++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_flow_db.c   | 69 +++++++++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_flow_db.h   | 11 ++++++
 4 files changed, 115 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 3795c6d..56e08f2 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -517,6 +517,9 @@ bnxt_ulp_deinit(struct bnxt *bp)
 	if (!session)
 		return;
 
+	/* clean up regular flows */
+	ulp_flow_db_flush_flows(&bp->ulp_ctx, BNXT_ULP_REGULAR_FLOW_TABLE);
+
 	/* cleanup the eem table scope */
 	ulp_eem_tbl_scope_deinit(bp, &bp->ulp_ctx);
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 35099a3..4958895 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -262,11 +262,42 @@ bnxt_ulp_flow_destroy(struct rte_eth_dev *dev,
 	return ret;
 }
 
+/* Function to destroy the rte flows. */
+static int32_t
+bnxt_ulp_flow_flush(struct rte_eth_dev *eth_dev,
+		    struct rte_flow_error *error)
+{
+	struct bnxt_ulp_context *ulp_ctx;
+	int32_t ret;
+	struct bnxt *bp;
+
+	ulp_ctx = bnxt_ulp_eth_dev_ptr2_cntxt_get(eth_dev);
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "ULP context is not initialized\n");
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to flush flow.");
+		return -EINVAL;
+	}
+	bp = eth_dev->data->dev_private;
+
+	/* Free the resources for the last device */
+	if (!ulp_ctx_deinit_allowed(bp))
+		return 0;
+
+	ret = ulp_flow_db_flush_flows(ulp_ctx, BNXT_ULP_REGULAR_FLOW_TABLE);
+	if (ret)
+		rte_flow_error_set(error, ret,
+				   RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
+				   "Failed to flush flow.");
+	return ret;
+}
+
 const struct rte_flow_ops bnxt_ulp_rte_flow_ops = {
 	.validate = bnxt_ulp_flow_validate,
 	.create = bnxt_ulp_flow_create,
 	.destroy = bnxt_ulp_flow_destroy,
-	.flush = NULL,
+	.flush = bnxt_ulp_flow_flush,
 	.query = NULL,
 	.isolate = NULL
 };
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
index ee703a1..aed5078 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.c
@@ -555,3 +555,72 @@ int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
 	/* all good, return success */
 	return 0;
 }
+
+/* Get the next flow database entry (iterator)
+ *
+ * flowtbl [in] Ptr to the flow table
+ * fid [in/out] The index to the flow entry
+ *
+ * returns 0 on success and negative on failure.
+ */
+static int32_t
+ulp_flow_db_next_entry_get(struct bnxt_ulp_flow_tbl	*flowtbl,
+			   uint32_t			*fid)
+{
+	uint32_t	lfid = *fid;
+	uint32_t	idx;
+	uint64_t	bs;
+
+	do {
+		lfid++;
+		if (lfid >= flowtbl->num_flows)
+			return -ENOENT;
+		idx = lfid / ULP_INDEX_BITMAP_SIZE;
+		while (!(bs = flowtbl->active_flow_tbl[idx])) {
+			idx++;
+			if ((idx * ULP_INDEX_BITMAP_SIZE) >= flowtbl->num_flows)
+				return -ENOENT;
+		}
+		lfid = (idx * ULP_INDEX_BITMAP_SIZE) + __builtin_clzl(bs);
+		if (*fid >= lfid) {
+			BNXT_TF_DBG(ERR, "Flow Database is corrupt\n");
+			return -ENOENT;
+		}
+	} while (!ulp_flow_db_active_flow_is_set(flowtbl, lfid));
+
+	/* all good, return success */
+	*fid = lfid;
+	return 0;
+}
+
+/*
+ * Flush all flows in the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * tbl_idx [in] The index to table
+ *
+ * returns 0 on success or negative number on failure
+ */
+int32_t	ulp_flow_db_flush_flows(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t		idx)
+{
+	uint32_t			fid = 0;
+	struct bnxt_ulp_flow_db		*flow_db;
+	struct bnxt_ulp_flow_tbl	*flow_tbl;
+
+	if (!ulp_ctx) {
+		BNXT_TF_DBG(ERR, "Invalid Argument\n");
+		return -EINVAL;
+	}
+
+	flow_db = bnxt_ulp_cntxt_ptr2_flow_db_get(ulp_ctx);
+	if (!flow_db) {
+		BNXT_TF_DBG(ERR, "Flow database not found\n");
+		return -EINVAL;
+	}
+	flow_tbl = &flow_db->flow_tbl[idx];
+	while (!ulp_flow_db_next_entry_get(flow_tbl, &fid))
+		(void)ulp_mapper_resources_free(ulp_ctx, fid, idx);
+
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
index eb5effa..5435415 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_flow_db.h
@@ -142,4 +142,15 @@ int32_t	ulp_flow_db_fid_free(struct bnxt_ulp_context		*ulp_ctxt,
 			     enum bnxt_ulp_flow_db_tables	tbl_idx,
 			     uint32_t				fid);
 
+/*
+ * Flush all flows in the flow database.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * tbl_idx [in] The index to table
+ *
+ * returns 0 on success or negative number on failure
+ */
+int32_t	ulp_flow_db_flush_flows(struct bnxt_ulp_context *ulp_ctx,
+				uint32_t		idx);
+
 #endif /* _ULP_FLOW_DB_H_ */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 31/34] net/bnxt: register tf rte flow ops
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (29 preceding siblings ...)
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 30/34] net/bnxt: add support for rte flow flush " Venkat Duvvuru
@ 2020-04-15  8:19       ` Venkat Duvvuru
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled Venkat Duvvuru
                         ` (5 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:19 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

Register bnxt_ulp_rte_flow_ops when host based TRUFLOW is
enabled.
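
For context, the sketch below shows roughly how an rte_flow call
reaches the ops registered here. It is a simplified rendering of the
ethdev dispatch at the time of this series, not code from this patch,
and assumes the driver-internal header rte_ethdev_driver.h:

    #include <rte_ethdev_driver.h>
    #include <rte_flow_driver.h>

    static struct rte_flow *
    flow_create_via_ops(struct rte_eth_dev *dev,
                        const struct rte_flow_attr *attr,
                        const struct rte_flow_item pattern[],
                        const struct rte_flow_action actions[],
                        struct rte_flow_error *error)
    {
        const struct rte_flow_ops *ops = NULL;

        /* bnxt_filter_ctrl_op returns bnxt_ulp_rte_flow_ops here when
         * host based TRUFLOW is enabled, bnxt_flow_ops otherwise.
         */
        if (dev->dev_ops->filter_ctrl(dev, RTE_ETH_FILTER_GENERIC,
                                      RTE_ETH_FILTER_GET, &ops) || !ops)
            return NULL;

        return ops->create(dev, attr, pattern, actions, error);
    }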

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        | 1 +
 drivers/net/bnxt/bnxt_ethdev.c | 6 +++++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index cd20740..a70cdff 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -731,6 +731,7 @@ extern int bnxt_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	  PMD_DRV_LOG_RAW(level, fmt, ## args)
 
+extern const struct rte_flow_ops bnxt_ulp_rte_flow_ops;
 int32_t bnxt_ulp_init(struct bnxt *bp);
 void bnxt_ulp_deinit(struct bnxt *bp);
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 2f08921..783e6a4 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3288,6 +3288,7 @@ bnxt_filter_ctrl_op(struct rte_eth_dev *dev,
 		    enum rte_filter_type filter_type,
 		    enum rte_filter_op filter_op, void *arg)
 {
+	struct bnxt *bp = dev->data->dev_private;
 	int ret = 0;
 
 	ret = is_bnxt_in_error(dev->data->dev_private);
@@ -3311,7 +3312,10 @@ bnxt_filter_ctrl_op(struct rte_eth_dev *dev,
 	case RTE_ETH_FILTER_GENERIC:
 		if (filter_op != RTE_ETH_FILTER_GET)
 			return -EINVAL;
-		*(const void **)arg = &bnxt_flow_ops;
+		if (bp->truflow)
+			*(const void **)arg = &bnxt_ulp_rte_flow_ops;
+		else
+			*(const void **)arg = &bnxt_flow_ops;
 		break;
 	default:
 		PMD_DRV_LOG(ERR,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (30 preceding siblings ...)
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 31/34] net/bnxt: register tf rte flow ops Venkat Duvvuru
@ 2020-04-15  8:19       ` Venkat Duvvuru
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 33/34] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
                         ` (4 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:19 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

Don't enable vector mode when bp->truflow is set.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 783e6a4..5d5b8e0 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -788,7 +788,8 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 		DEV_RX_OFFLOAD_TCP_CKSUM |
 		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
 		DEV_RX_OFFLOAD_RSS_HASH |
-		DEV_RX_OFFLOAD_VLAN_FILTER))) {
+		DEV_RX_OFFLOAD_VLAN_FILTER)) &&
+	    !bp->truflow) {
 		PMD_DRV_LOG(INFO, "Using vector mode receive for port %d\n",
 			    eth_dev->data->port_id);
 		bp->flags |= BNXT_FLAG_RX_VECTOR_PKT_MODE;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 33/34] net/bnxt: add support for injecting mark into packet’s mbuf
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (31 preceding siblings ...)
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled Venkat Duvvuru
@ 2020-04-15  8:19       ` Venkat Duvvuru
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 34/34] net/bnxt: enable meson build on truflow code Venkat Duvvuru
                         ` (3 subsequent siblings)
  36 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:19 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru, Mike Baucom

When a flow is offloaded with MARK action (RTE_FLOW_ACTION_TYPE_MARK),
each packet of that flow will have metadata set in its completion.
This metadata will be used to fetch an index into a mark table where
the actual MARK for that flow is stored. Fetch the MARK from the mark
table and inject it into the packet's mbuf.
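
On the receive side, the application sees the injected mark as a
flow-director id in the mbuf. A hypothetical sketch of reading it back
(the sentinel return value is an assumption):

    #include <stdint.h>
    #include <rte_mbuf.h>

    static uint32_t get_flow_mark(const struct rte_mbuf *m)
    {
        /* PKT_RX_FDIR_ID signals that hash.fdir.hi holds the mark. */
        if (m->ol_flags & PKT_RX_FDIR_ID)
            return m->hash.fdir.hi;
        return UINT32_MAX;    /* no mark present */
    }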

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Lance Richardson <lance.richardson@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_rxr.c            | 153 ++++++++++++++++++++++++---------
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c |  55 +++++++++++-
 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h |  18 ++++
 3 files changed, 183 insertions(+), 43 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index bef9720..40da2f2 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -20,6 +20,9 @@
 #include "bnxt_hwrm.h"
 #endif
 
+#include <bnxt_tf_common.h>
+#include <ulp_mark_mgr.h>
+
 /*
  * RX Ring handling
  */
@@ -399,6 +402,109 @@ bnxt_get_rx_ts_thor(struct bnxt *bp, uint32_t rx_ts_cmpl)
 }
 #endif
 
+static void
+bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
+			  struct rte_mbuf *mbuf)
+{
+	uint32_t cfa_code;
+	uint32_t meta_fmt;
+	uint32_t meta;
+	uint32_t eem = 0;
+	uint32_t mark_id;
+	uint32_t flags2;
+	int rc;
+
+	cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
+	flags2 = rte_le_to_cpu_32(rxcmp1->flags2);
+	meta = rte_le_to_cpu_32(rxcmp1->metadata);
+	if (meta) {
+		meta >>= BNXT_RX_META_CFA_CODE_SHIFT;
+
+		/* The flags field holds extra bits of info from [6:4]
+		 * which indicate if the flow is in TCAM or EM or EEM
+		 */
+		meta_fmt = (flags2 & BNXT_CFA_META_FMT_MASK) >>
+			    BNXT_CFA_META_FMT_SHFT;
+		/* meta_fmt == 4 => 'b100 => 'b10x => EM.
+		 * meta_fmt == 5 => 'b101 => 'b10x => EM + VLAN
+		 * meta_fmt == 6 => 'b110 => 'b11x => EEM
+		 * meta_fmt == 7 => 'b111 => 'b11x => EEM + VLAN.
+		 */
+		meta_fmt >>= BNXT_CFA_META_FMT_EM_EEM_SHFT;
+
+		eem = meta_fmt == BNXT_CFA_META_FMT_EEM;
+
+		/* For EEM flows, the first part of cfa_code is 16 bits.
+		 * The second part is embedded in the
+		 * metadata field from bit 19 onwards. The driver needs to
+		 * ignore the first 19 bits of metadata and use the next 12
+		 * bits as higher 12 bits of cfa_code.
+		 */
+		if (eem)
+			cfa_code |= meta << BNXT_CFA_CODE_META_SHIFT;
+	}
+
+	if (cfa_code) {
+		mbuf->hash.fdir.hi = 0;
+		mbuf->hash.fdir.id = 0;
+		if (eem)
+			rc = ulp_mark_db_mark_get(&bp->ulp_ctx, true,
+						  cfa_code, &mark_id);
+		else
+			rc = ulp_mark_db_mark_get(&bp->ulp_ctx, false,
+						  cfa_code, &mark_id);
+		/* If the above fails, simply return and don't add the mark to
+		 * mbuf
+		 */
+		if (rc)
+			return;
+
+		mbuf->hash.fdir.hi	= mark_id;
+		mbuf->udata64		= (cfa_code & 0xffffffffull) << 32;
+		mbuf->hash.fdir.id	= rxcmp1->cfa_code;
+		mbuf->ol_flags		|= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+	}
+}
+
+void bnxt_set_mark_in_mbuf(struct bnxt *bp,
+			   struct rx_pkt_cmpl_hi *rxcmp1,
+			   struct rte_mbuf *mbuf)
+{
+	uint32_t cfa_code = 0;
+	uint8_t meta_fmt = 0;
+	uint16_t flags2 = 0;
+	uint32_t meta =  0;
+
+	cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
+	if (!cfa_code)
+		return;
+
+	if (cfa_code && !bp->mark_table[cfa_code].valid)
+		return;
+
+	flags2 = rte_le_to_cpu_16(rxcmp1->flags2);
+	meta = rte_le_to_cpu_32(rxcmp1->metadata);
+	if (meta) {
+		meta >>= BNXT_RX_META_CFA_CODE_SHIFT;
+
+		/* The flags field holds extra bits of info from [6:4]
+		 * which indicate if the flow is in TCAM or EM or EEM
+		 */
+		meta_fmt = (flags2 & BNXT_CFA_META_FMT_MASK) >>
+			   BNXT_CFA_META_FMT_SHFT;
+
+		/* meta_fmt == 4 => 'b100 => 'b10x => EM.
+		 * meta_fmt == 5 => 'b101 => 'b10x => EM + VLAN
+		 * meta_fmt == 6 => 'b110 => 'b11x => EEM
+		 * meta_fmt == 7 => 'b111 => 'b11x => EEM + VLAN.
+		 */
+		meta_fmt >>= BNXT_CFA_META_FMT_EM_EEM_SHFT;
+	}
+
+	mbuf->hash.fdir.hi = bp->mark_table[cfa_code].mark_id;
+	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+}
+
 static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 			    struct bnxt_rx_queue *rxq, uint32_t *raw_cons)
 {
@@ -415,6 +521,7 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	uint16_t cmp_type;
 	uint32_t flags2_f = 0;
 	uint16_t flags_type;
+	struct bnxt *bp = rxq->bp;
 
 	rxcmp = (struct rx_pkt_cmpl *)
 	    &cpr->cp_desc_ring[cp_cons];
@@ -490,7 +597,10 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 		mbuf->ol_flags |= PKT_RX_RSS_HASH;
 	}
 
-	bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+	if (bp->truflow)
+		bnxt_ulp_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
+	else
+		bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
 
 #ifdef RTE_LIBRTE_IEEE1588
 	if (unlikely((flags_type & RX_PKT_CMPL_FLAGS_MASK) ==
@@ -896,44 +1006,3 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
 
 	return 0;
 }
-
-void bnxt_set_mark_in_mbuf(struct bnxt *bp,
-			   struct rx_pkt_cmpl_hi *rxcmp1,
-			   struct rte_mbuf *mbuf)
-{
-	uint32_t cfa_code = 0;
-	uint8_t meta_fmt =  0;
-	uint16_t flags2 = 0;
-	uint32_t meta =  0;
-
-	cfa_code = rte_le_to_cpu_16(rxcmp1->cfa_code);
-	if (!cfa_code)
-		return;
-
-	if (cfa_code && !bp->mark_table[cfa_code].valid)
-		return;
-
-	flags2 = rte_le_to_cpu_16(rxcmp1->flags2);
-	meta = rte_le_to_cpu_32(rxcmp1->metadata);
-	if (meta) {
-		meta >>= BNXT_RX_META_CFA_CODE_SHIFT;
-
-		/*
-		 * The flags field holds extra bits of info from [6:4]
-		 * which indicate if the flow is in TCAM or EM or EEM
-		 */
-		meta_fmt = (flags2 & BNXT_CFA_META_FMT_MASK) >>
-			   BNXT_CFA_META_FMT_SHFT;
-
-		/*
-		 * meta_fmt == 4 => 'b100 => 'b10x => EM.
-		 * meta_fmt == 5 => 'b101 => 'b10x => EM + VLAN
-		 * meta_fmt == 6 => 'b110 => 'b11x => EEM
-		 * meta_fmt == 7 => 'b111 => 'b11x => EEM + VLAN.
-		 */
-		meta_fmt >>= BNXT_CFA_META_FMT_EM_EEM_SHFT;
-	}
-
-	mbuf->hash.fdir.hi = bp->mark_table[cfa_code].mark_id;
-	mbuf->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
-}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
index 566668e..ad83531 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
@@ -58,7 +58,7 @@ ulp_mark_db_mark_set(struct bnxt_ulp_context *ctxt,
 	idx = ulp_mark_db_idx_get(is_gfid, fid, mtbl);
 
 	if (is_gfid) {
-		BNXT_TF_DBG(ERR, "Set GFID[0x%0x] = 0x%0x\n", idx, mark);
+		BNXT_TF_DBG(DEBUG, "Set GFID[0x%0x] = 0x%0x\n", idx, mark);
 
 		mtbl->gfid_tbl[idx].mark_id = mark;
 		mtbl->gfid_tbl[idx].valid = true;
@@ -176,6 +176,59 @@ ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt)
 }
 
 /*
+ * Get a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * fid [in] The flow id that is returned by HW in BD
+ *
+ * mark [out] The mark that is associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_get(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t *mark)
+{
+	struct bnxt_ulp_mark_tbl *mtbl;
+	uint32_t idx = 0;
+
+	if (!ctxt || !mark)
+		return -EINVAL;
+
+	mtbl = bnxt_ulp_cntxt_ptr2_mark_db_get(ctxt);
+	if (!mtbl) {
+		BNXT_TF_DBG(ERR, "Unable to get Mark Table\n");
+		return -EINVAL;
+	}
+
+	idx = ulp_mark_db_idx_get(is_gfid, fid, mtbl);
+
+	if (is_gfid) {
+		if (!mtbl->gfid_tbl[idx].valid)
+			return -EINVAL;
+
+		BNXT_TF_DBG(DEBUG, "Get GFID[0x%0x] = 0x%0x\n",
+			    idx, mtbl->gfid_tbl[idx].mark_id);
+
+		*mark = mtbl->gfid_tbl[idx].mark_id;
+	} else {
+		if (!mtbl->lfid_tbl[idx].valid)
+			return -EINVAL;
+
+		BNXT_TF_DBG(DEBUG, "Get LFID[0x%0x] = 0x%0x\n",
+			    idx, mtbl->lfid_tbl[idx].mark_id);
+
+		*mark = mtbl->lfid_tbl[idx].mark_id;
+	}
+
+	return 0;
+}
+
+/*
  * Adds a Mark to the Mark Manager
  *
  * ctxt [in] The ulp context for the mark manager
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
index f0d1515..0f8a5e5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
@@ -55,6 +55,24 @@ int32_t
 ulp_mark_db_deinit(struct bnxt_ulp_context *ctxt);
 
 /*
+ * Get a Mark from the Mark Manager
+ *
+ * ctxt [in] The ulp context for the mark manager
+ *
+ * is_gfid [in] The type of fid (GFID or LFID)
+ *
+ * fid [in] The flow id that is returned by HW in BD
+ *
+ * mark [out] The mark that is associated with the FID
+ *
+ */
+int32_t
+ulp_mark_db_mark_get(struct bnxt_ulp_context *ctxt,
+		     bool is_gfid,
+		     uint32_t fid,
+		     uint32_t *mark);
+
+/*
  * Adds a Mark to the Mark Manager
  *
  * ctxt [in] The ulp context for the mark manager
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* [dpdk-dev] [PATCH v4 34/34] net/bnxt: enable meson build on truflow code
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (32 preceding siblings ...)
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 33/34] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
@ 2020-04-15  8:19       ` Venkat Duvvuru
  2020-04-22 21:27         ` Thomas Monjalon
  2020-04-15 15:29       ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Ajit Khaparde
                         ` (2 subsequent siblings)
  36 siblings, 1 reply; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:19 UTC (permalink / raw)
  To: dev; +Cc: Venkat Duvvuru

Include the tf_ulp and tf_core directories and the files inside them.

Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/meson.build | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 0c311d2..d75f887 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -1,7 +1,12 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Intel Corporation
+# Copyright(c) 2020 Broadcom
 
 install_headers('rte_pmd_bnxt.h')
+
+includes += include_directories('tf_ulp')
+includes += include_directories('tf_core')
+
 sources = files('bnxt_cpr.c',
 	'bnxt_ethdev.c',
 	'bnxt_filter.c',
@@ -16,6 +21,27 @@ sources = files('bnxt_cpr.c',
 	'bnxt_txr.c',
 	'bnxt_util.c',
 	'bnxt_vnic.c',
+
+	'tf_core/tf_core.c',
+	'tf_core/bitalloc.c',
+	'tf_core/tf_msg.c',
+	'tf_core/rand.c',
+	'tf_core/stack.c',
+	'tf_core/tf_em.c',
+	'tf_core/tf_rm.c',
+	'tf_core/tf_tbl.c',
+	'tf_core/tfp.c',
+
+	'tf_ulp/bnxt_ulp.c',
+	'tf_ulp/ulp_mark_mgr.c',
+	'tf_ulp/ulp_flow_db.c',
+	'tf_ulp/ulp_template_db.c',
+	'tf_ulp/ulp_utils.c',
+	'tf_ulp/ulp_mapper.c',
+	'tf_ulp/ulp_matcher.c',
+	'tf_ulp/ulp_rte_parser.c',
+	'tf_ulp/bnxt_ulp_flow.c',
+
 	'rte_pmd_bnxt.c')
 
 if arch_subdir == 'x86'
-- 
2.7.4


^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management
  2020-04-13 21:35   ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Thomas Monjalon
@ 2020-04-15  8:56     ` Venkat Duvvuru
  0 siblings, 0 replies; 154+ messages in thread
From: Venkat Duvvuru @ 2020-04-15  8:56 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev

On Tue, Apr 14, 2020 at 3:06 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 13/04/2020 21:39, Venkat Duvvuru:
> > This patchset introduces a new mechanism to allow host-memory based
> > flow table management. This should allow higher flow scalability
> > than what is currently supported. This new approach also defines a
> > new rte_flow parser, and mapper which currently supports basic packet
> > classification in receive path. The patchset uses a newly implemented
> > control-plane firmware interface which optimizes flow insertions and
> > deletions.
> >
> > This is a baseline patchset with limited scale. Follow on patches will
> > add support for more protocol headers, rte_flow attributes, actions
> > and such.
>
> It seems this patchset is adding features but I don't see any
> documentation update. Should you update the feature list?
>         doc/guides/nics/features/bnxt.ini

This patchset implements a different mechanism for a feature that is
already listed in doc/guides/nics/features/bnxt.ini, so I believe we
don't have to update this file.

^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (33 preceding siblings ...)
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 34/34] net/bnxt: enable meson build on truflow code Venkat Duvvuru
@ 2020-04-15 15:29       ` Ajit Khaparde
  2020-04-16 16:23       ` Ferruh Yigit
  2020-04-16 17:40       ` Ferruh Yigit
  36 siblings, 0 replies; 154+ messages in thread
From: Ajit Khaparde @ 2020-04-15 15:29 UTC (permalink / raw)
  To: Venkat Duvvuru; +Cc: dpdk-dev

On Wed, Apr 15, 2020 at 1:19 AM Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com> wrote:

> This patchset introduces a new mechanism to allow host-memory based
> flow table management. This should allow higher flow scalability
> than what is currently supported. This new approach also defines a
> new rte_flow parser, and mapper which currently supports basic packet
> classification in receive path. The patchset uses a newly implemented
> control-plane firmware interface which optimizes flow insertions and
> deletions.
>
> This is a baseline patchset with limited scale. Follow on patches will
> add support for more protocol headers, rte_flow attributes, actions
> and such.
>
> This is a tech preview feature, hence disabled by default and can be
> enabled
> using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1”.
>
> v3==>v4
> =======
> 1. Fixed some more compilation issues reported by CI
>

Patchset applied to dpdk-next-net-brcm.



>
> Ajit Kumar Khaparde (1):
>   net/bnxt: add updated dpdk hsi structure
>
> Farah Smith (2):
>   net/bnxt: add tf core identifier support
>   net/bnxt: add tf core table scope support
>
> Kishore Padmanabha (8):
>   net/bnxt: match rte flow items with flow template patterns
>   net/bnxt: match rte flow actions with flow template actions
>   net/bnxt: add support for rte flow item parsing
>   net/bnxt: add support for rte flow action parsing
>   net/bnxt: add support for rte flow create driver hook
>   net/bnxt: add support for rte flow validate driver hook
>   net/bnxt: add support for rte flow destroy driver hook
>   net/bnxt: add support for rte flow flush driver hook
>
> Michael Wildt (4):
>   net/bnxt: add initial tf core session open
>   net/bnxt: add initial tf core session close support
>   net/bnxt: add tf core session sram functions
>   net/bnxt: add resource manager functionality
>
> Mike Baucom (5):
>   net/bnxt: add helper functions for blob/regfile ops
>   net/bnxt: add support to process action tables
>   net/bnxt: add support to process key tables
>   net/bnxt: add support to free key and action tables
>   net/bnxt: add support to alloc and program key and act tbls
>
> Pete Spreadborough (2):
>   net/bnxt: add truflow message handlers
>   net/bnxt: add EM/EEM functionality
>
> Randy Schacher (1):
>   net/bnxt: update hwrm prep to use ptr
>
> Shahaji Bhosle (2):
>   net/bnxt: add initial tf core resource mgmt support
>   net/bnxt: add tf core TCAM support
>
> Venkat Duvvuru (9):
>   net/bnxt: fetch SVIF information from the firmware
>   net/bnxt: fetch vnic info from DPDK port
>   net/bnxt: add devargs parameter for host memory based TRUFLOW feature
>   net/bnxt: add support for ULP session manager init
>   net/bnxt: add support for ULP session manager cleanup
>   net/bnxt: register tf rte flow ops
>   net/bnxt: disable vector mode when host based TRUFLOW is enabled
>   net/bnxt: add support for injecting mark into packet’s mbuf
>   net/bnxt: enable meson build on truflow code
>
>  drivers/net/bnxt/Makefile                       |   24 +
>  drivers/net/bnxt/bnxt.h                         |   21 +-
>  drivers/net/bnxt/bnxt_ethdev.c                  |  118 +-
>  drivers/net/bnxt/bnxt_hwrm.c                    |  319 +-
>  drivers/net/bnxt/bnxt_hwrm.h                    |   19 +
>  drivers/net/bnxt/bnxt_rxr.c                     |  153 +-
>  drivers/net/bnxt/hsi_struct_def_dpdk.h          | 3786 ++++++++++++++++++++---
>  drivers/net/bnxt/meson.build                    |   26 +
>  drivers/net/bnxt/tf_core/bitalloc.c             |  364 +++
>  drivers/net/bnxt/tf_core/bitalloc.h             |  119 +
>  drivers/net/bnxt/tf_core/hwrm_tf.h              |  992 ++++++
>  drivers/net/bnxt/tf_core/lookup3.h              |  162 +
>  drivers/net/bnxt/tf_core/rand.c                 |   47 +
>  drivers/net/bnxt/tf_core/rand.h                 |   36 +
>  drivers/net/bnxt/tf_core/stack.c                |  107 +
>  drivers/net/bnxt/tf_core/stack.h                |  107 +
>  drivers/net/bnxt/tf_core/tf_core.c              |  659 ++++
>  drivers/net/bnxt/tf_core/tf_core.h              | 1376 ++++++++
>  drivers/net/bnxt/tf_core/tf_em.c                |  515 +++
>  drivers/net/bnxt/tf_core/tf_em.h                |  117 +
>  drivers/net/bnxt/tf_core/tf_ext_flow_handle.h   |  166 +
>  drivers/net/bnxt/tf_core/tf_msg.c               | 1248 ++++++++
>  drivers/net/bnxt/tf_core/tf_msg.h               |  256 ++
>  drivers/net/bnxt/tf_core/tf_msg_common.h        |   47 +
>  drivers/net/bnxt/tf_core/tf_project.h           |   24 +
>  drivers/net/bnxt/tf_core/tf_resources.h         |  542 ++++
>  drivers/net/bnxt/tf_core/tf_rm.c                | 3297 ++++++++++++++++++++
>  drivers/net/bnxt/tf_core/tf_rm.h                |  321 ++
>  drivers/net/bnxt/tf_core/tf_session.h           |  300 ++
>  drivers/net/bnxt/tf_core/tf_tbl.c               | 1836 +++++++++++
>  drivers/net/bnxt/tf_core/tf_tbl.h               |  126 +
>  drivers/net/bnxt/tf_core/tfp.c                  |  163 +
>  drivers/net/bnxt/tf_core/tfp.h                  |  188 ++
>  drivers/net/bnxt/tf_ulp/bnxt_tf_common.h        |   54 +
>  drivers/net/bnxt/tf_ulp/bnxt_ulp.c              |  695 +++++
>  drivers/net/bnxt/tf_ulp/bnxt_ulp.h              |  110 +
>  drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c         |  303 ++
>  drivers/net/bnxt/tf_ulp/ulp_flow_db.c           |  626 ++++
>  drivers/net/bnxt/tf_ulp/ulp_flow_db.h           |  156 +
>  drivers/net/bnxt/tf_ulp/ulp_mapper.c            | 1513 +++++++++
>  drivers/net/bnxt/tf_ulp/ulp_mapper.h            |   69 +
>  drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c          |  271 ++
>  drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h          |  111 +
>  drivers/net/bnxt/tf_ulp/ulp_matcher.c           |  188 ++
>  drivers/net/bnxt/tf_ulp/ulp_matcher.h           |   35 +
>  drivers/net/bnxt/tf_ulp/ulp_rte_parser.c        | 1208 ++++++++
>  drivers/net/bnxt/tf_ulp/ulp_rte_parser.h        |  203 ++
>  drivers/net/bnxt/tf_ulp/ulp_template_db.c       | 1713 ++++++++++
>  drivers/net/bnxt/tf_ulp/ulp_template_db.h       |  354 +++
>  drivers/net/bnxt/tf_ulp/ulp_template_field_db.h |  130 +
>  drivers/net/bnxt/tf_ulp/ulp_template_struct.h   |  266 ++
>  drivers/net/bnxt/tf_ulp/ulp_utils.c             |  521 ++++
>  drivers/net/bnxt/tf_ulp/ulp_utils.h             |  279 ++
>  53 files changed, 25891 insertions(+), 495 deletions(-)
>  create mode 100644 drivers/net/bnxt/tf_core/bitalloc.c
>  create mode 100644 drivers/net/bnxt/tf_core/bitalloc.h
>  create mode 100644 drivers/net/bnxt/tf_core/hwrm_tf.h
>  create mode 100644 drivers/net/bnxt/tf_core/lookup3.h
>  create mode 100644 drivers/net/bnxt/tf_core/rand.c
>  create mode 100644 drivers/net/bnxt/tf_core/rand.h
>  create mode 100644 drivers/net/bnxt/tf_core/stack.c
>  create mode 100644 drivers/net/bnxt/tf_core/stack.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_core.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_core.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_em.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_em.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_msg.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_msg.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_msg_common.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_project.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_resources.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_rm.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_rm.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_session.h
>  create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.c
>  create mode 100644 drivers/net/bnxt/tf_core/tf_tbl.h
>  create mode 100644 drivers/net/bnxt/tf_core/tfp.c
>  create mode 100644 drivers/net/bnxt/tf_core/tfp.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_flow_db.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mapper.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_mark_mgr.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_matcher.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_db.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_field_db.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_template_struct.h
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.c
>  create mode 100644 drivers/net/bnxt/tf_ulp/ulp_utils.h
>
> --
> 2.7.4
>
>

^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (34 preceding siblings ...)
  2020-04-15 15:29       ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Ajit Khaparde
@ 2020-04-16 16:23       ` Ferruh Yigit
  2020-04-16 16:38         ` Ajit Khaparde
  2020-04-16 17:40       ` Ferruh Yigit
  36 siblings, 1 reply; 154+ messages in thread
From: Ferruh Yigit @ 2020-04-16 16:23 UTC (permalink / raw)
  To: Venkat Duvvuru, Ajit Khaparde; +Cc: dev

On 4/15/2020 9:18 AM, Venkat Duvvuru wrote:
> This patchset introduces a new mechanism to allow host-memory based
> flow table management. This should allow higher flow scalability
> than what is currently supported. This new approach also defines a
> new rte_flow parser, and mapper which currently supports basic packet
> classification in receive path. The patchset uses a newly implemented
> control-plane firmware interface which optimizes flow insertions and
> deletions.
> 
> This is a baseline patchset with limited scale. Follow on patches will
> add support for more protocol headers, rte_flow attributes, actions
> and such.
> 
> This is a tech preview feature, hence disabled by default and can be enabled
> using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1”.
> 
> v3==>v4
> =======
> 1. Fixed some more compilation issues reported by CI
> 
> Ajit Kumar Khaparde (1):
>   net/bnxt: add updated dpdk hsi structure
> 
> Farah Smith (2):
>   net/bnxt: add tf core identifier support
>   net/bnxt: add tf core table scope support
> 
> Kishore Padmanabha (8):
>   net/bnxt: match rte flow items with flow template patterns
>   net/bnxt: match rte flow actions with flow template actions
>   net/bnxt: add support for rte flow item parsing
>   net/bnxt: add support for rte flow action parsing
>   net/bnxt: add support for rte flow create driver hook
>   net/bnxt: add support for rte flow validate driver hook
>   net/bnxt: add support for rte flow destroy driver hook
>   net/bnxt: add support for rte flow flush driver hook
> 
> Michael Wildt (4):
>   net/bnxt: add initial tf core session open
>   net/bnxt: add initial tf core session close support
>   net/bnxt: add tf core session sram functions
>   net/bnxt: add resource manager functionality
> 
> Mike Baucom (5):
>   net/bnxt: add helper functions for blob/regfile ops
>   net/bnxt: add support to process action tables
>   net/bnxt: add support to process key tables
>   net/bnxt: add support to free key and action tables
>   net/bnxt: add support to alloc and program key and act tbls
> 
> Pete Spreadborough (2):
>   net/bnxt: add truflow message handlers
>   net/bnxt: add EM/EEM functionality
> 
> Randy Schacher (1):
>   net/bnxt: update hwrm prep to use ptr
> 
> Shahaji Bhosle (2):
>   net/bnxt: add initial tf core resource mgmt support
>   net/bnxt: add tf core TCAM support
> 
> Venkat Duvvuru (9):
>   net/bnxt: fetch SVIF information from the firmware
>   net/bnxt: fetch vnic info from DPDK port
>   net/bnxt: add devargs parameter for host memory based TRUFLOW feature
>   net/bnxt: add support for ULP session manager init
>   net/bnxt: add support for ULP session manager cleanup
>   net/bnxt: register tf rte flow ops
>   net/bnxt: disable vector mode when host based TRUFLOW is enabled
>   net/bnxt: add support for injecting mark into packet’s mbuf
>   net/bnxt: enable meson build on truflow code
> 

Can you please update the release notes too?
Also what do you think about updating the PMD documentation for TruFlow?

These can be separate patches and can be merged to next-net separately.

Thanks,
ferruh


^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management
  2020-04-16 16:23       ` Ferruh Yigit
@ 2020-04-16 16:38         ` Ajit Khaparde
  0 siblings, 0 replies; 154+ messages in thread
From: Ajit Khaparde @ 2020-04-16 16:38 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Venkat Duvvuru, dpdk-dev

On Thu, Apr 16, 2020 at 9:23 AM Ferruh Yigit <ferruh.yigit@intel.com> wrote:

> On 4/15/2020 9:18 AM, Venkat Duvvuru wrote:
> > This patchset introduces a new mechanism to allow host-memory based
> > flow table management. This should allow higher flow scalability
> > than what is currently supported. This new approach also defines a
> > new rte_flow parser, and mapper which currently supports basic packet
> > classification in receive path. The patchset uses a newly implemented
> > control-plane firmware interface which optimizes flow insertions and
> > deletions.
> >
> > This is a baseline patchset with limited scale. Follow on patches will
> > add support for more protocol headers, rte_flow attributes, actions
> > and such.
> >
> > This is a tech preview feature, hence disabled by default and can be
> enabled
> > using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1”.
> >
> > v3==>v4
> > =======
> > 1. Fixed some more compilation issues reported by CI
> >
> > Ajit Kumar Khaparde (1):
> >   net/bnxt: add updated dpdk hsi structure
> >
> > Farah Smith (2):
> >   net/bnxt: add tf core identifier support
> >   net/bnxt: add tf core table scope support
> >
> > Kishore Padmanabha (8):
> >   net/bnxt: match rte flow items with flow template patterns
> >   net/bnxt: match rte flow actions with flow template actions
> >   net/bnxt: add support for rte flow item parsing
> >   net/bnxt: add support for rte flow action parsing
> >   net/bnxt: add support for rte flow create driver hook
> >   net/bnxt: add support for rte flow validate driver hook
> >   net/bnxt: add support for rte flow destroy driver hook
> >   net/bnxt: add support for rte flow flush driver hook
> >
> > Michael Wildt (4):
> >   net/bnxt: add initial tf core session open
> >   net/bnxt: add initial tf core session close support
> >   net/bnxt: add tf core session sram functions
> >   net/bnxt: add resource manager functionality
> >
> > Mike Baucom (5):
> >   net/bnxt: add helper functions for blob/regfile ops
> >   net/bnxt: add support to process action tables
> >   net/bnxt: add support to process key tables
> >   net/bnxt: add support to free key and action tables
> >   net/bnxt: add support to alloc and program key and act tbls
> >
> > Pete Spreadborough (2):
> >   net/bnxt: add truflow message handlers
> >   net/bnxt: add EM/EEM functionality
> >
> > Randy Schacher (1):
> >   net/bnxt: update hwrm prep to use ptr
> >
> > Shahaji Bhosle (2):
> >   net/bnxt: add initial tf core resource mgmt support
> >   net/bnxt: add tf core TCAM support
> >
> > Venkat Duvvuru (9):
> >   net/bnxt: fetch SVIF information from the firmware
> >   net/bnxt: fetch vnic info from DPDK port
> >   net/bnxt: add devargs parameter for host memory based TRUFLOW feature
> >   net/bnxt: add support for ULP session manager init
> >   net/bnxt: add support for ULP session manager cleanup
> >   net/bnxt: register tf rte flow ops
> >   net/bnxt: disable vector mode when host based TRUFLOW is enabled
> >   net/bnxt: add support for injecting mark into packet’s mbuf
> >   net/bnxt: enable meson build on truflow code
> >
>
> Can you please update the release notes too?
> Also what do you think about updating the PMD documentation for TruFlow?
>
> These can be separate patches and can be merged to next-net separately.
>
Yes Ferruh. We do have follow-on patches; we will include the release
notes update.
PMD documentation is also in the works and will be submitted soon.


>
> Thanks,
> ferruh
>
>

^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 05/34] net/bnxt: add initial tf core session close support
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 05/34] net/bnxt: add initial tf core session close support Venkat Duvvuru
@ 2020-04-16 17:39         ` Ferruh Yigit
  2020-04-16 17:48           ` Ajit Khaparde
  0 siblings, 1 reply; 154+ messages in thread
From: Ferruh Yigit @ 2020-04-16 17:39 UTC (permalink / raw)
  To: Venkat Duvvuru, dev; +Cc: Michael Wildt

On 4/15/2020 9:18 AM, Venkat Duvvuru wrote:
> From: Michael Wildt <michael.wildt@broadcom.com>
> 
> - Add TruFlow session and resource support functions
> - Add Truflow session close API and related message support functions
>   for both session and hw resources
> 
> Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
> Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
> Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>

<...>

> +static inline uint32_t SWAP_WORDS32(uint32_t val32)
> +{
> +	return (((val32 & 0x0000ffff) << 16) |
> +		((val32 & 0xffff0000) >> 16));
> +}
> +

'SWAP_WORDS32()' is not used in this patch and causing a build warning [1], can
you please add this function on the patch it is used?

[1]
.../drivers/net/bnxt/tf_core/tf_core.c:17:24: error: unused function
'SWAP_WORDS32' [-Werror,-Wunused-function]
static inline uint32_t SWAP_WORDS32(uint32_t val32)
                       ^

^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 04/34] net/bnxt: add initial tf core session open
  2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 04/34] net/bnxt: add initial tf core session open Venkat Duvvuru
@ 2020-04-16 17:39         ` Ferruh Yigit
  2020-04-16 17:47           ` Ajit Khaparde
  0 siblings, 1 reply; 154+ messages in thread
From: Ferruh Yigit @ 2020-04-16 17:39 UTC (permalink / raw)
  To: Venkat Duvvuru, dev; +Cc: Michael Wildt

On 4/15/2020 9:18 AM, Venkat Duvvuru wrote:
> From: Michael Wildt <michael.wildt@broadcom.com>
> 
> - Add infrastructure support
> - Add tf_core open session support
> 
> Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
> Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
> Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
> Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>

<...>

> +int
> +tfp_calloc(struct tfp_calloc_parms *parms)
> +{
> +	if (parms == NULL)
> +		return -EINVAL;
> +
> +	parms->mem_va = rte_zmalloc("tf",
> +				    (parms->nitems * parms->size),
> +				    parms->alignment);
> +	if (parms->mem_va == NULL) {
> +		PMD_DRV_LOG(ERR, "Allocate failed mem_va\n");
> +		return -ENOMEM;
> +	}
> +
> +	parms->mem_pa = (void *)((uintptr_t)rte_mem_virt2iova(parms->mem_va));
> +	if (parms->mem_pa == (void *)RTE_BAD_IOVA) {

This is causing a warning for 32-bit icc [1], because a 64-bit value is
converted to a 32-bit pointer. Can you do the same casting that has been
done one line above, like [2]?

[1]
.../drivers/net/bnxt/tf_core/tfp.c(110): warning #2259: non-pointer conversion
from "rte_iova_t={uint64_t={__uint64_t={unsigned long long}}}" to "void *" may
lose significant bits
        if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
                             ^
[2]
 -       if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
 +       if (parms->mem_pa == (void *)((uintptr_t)RTE_BAD_IOVA)) {

^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management
  2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
                         ` (35 preceding siblings ...)
  2020-04-16 16:23       ` Ferruh Yigit
@ 2020-04-16 17:40       ` Ferruh Yigit
  2020-04-16 17:51         ` Ajit Khaparde
  36 siblings, 1 reply; 154+ messages in thread
From: Ferruh Yigit @ 2020-04-16 17:40 UTC (permalink / raw)
  To: Ajit Khaparde; +Cc: Venkat Duvvuru, dev

On 4/15/2020 9:18 AM, Venkat Duvvuru wrote:
> This patchset introduces a new mechanism to allow host-memory based
> flow table management. This should allow higher flow scalability
> than what is currently supported. This new approach also defines a
> new rte_flow parser, and mapper which currently supports basic packet
> classification in receive path. The patchset uses a newly implemented
> control-plane firmware interface which optimizes flow insertions and
> deletions.
> 
> This is a baseline patchset with limited scale. Follow on patches will
> add support for more protocol headers, rte_flow attributes, actions
> and such.
> 
> This is a tech preview feature, hence disabled by default and can be enabled
> using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1”.
> 
> v3==>v4
> =======
> 1. Fixed some more compilation issues reported by CI
> 
> Ajit Kumar Khaparde (1):
>   net/bnxt: add updated dpdk hsi structure
> 
> Farah Smith (2):
>   net/bnxt: add tf core identifier support
>   net/bnxt: add tf core table scope support
> 
> Kishore Padmanabha (8):
>   net/bnxt: match rte flow items with flow template patterns
>   net/bnxt: match rte flow actions with flow template actions
>   net/bnxt: add support for rte flow item parsing
>   net/bnxt: add support for rte flow action parsing
>   net/bnxt: add support for rte flow create driver hook
>   net/bnxt: add support for rte flow validate driver hook
>   net/bnxt: add support for rte flow destroy driver hook
>   net/bnxt: add support for rte flow flush driver hook
> 
> Michael Wildt (4):
>   net/bnxt: add initial tf core session open
>   net/bnxt: add initial tf core session close support
>   net/bnxt: add tf core session sram functions
>   net/bnxt: add resource manager functionality
> 
> Mike Baucom (5):
>   net/bnxt: add helper functions for blob/regfile ops
>   net/bnxt: add support to process action tables
>   net/bnxt: add support to process key tables
>   net/bnxt: add support to free key and action tables
>   net/bnxt: add support to alloc and program key and act tbls
> 
> Pete Spreadborough (2):
>   net/bnxt: add truflow message handlers
>   net/bnxt: add EM/EEM functionality
> 
> Randy Schacher (1):
>   net/bnxt: update hwrm prep to use ptr
> 
> Shahaji Bhosle (2):
>   net/bnxt: add initial tf core resource mgmt support
>   net/bnxt: add tf core TCAM support
> 
> Venkat Duvvuru (9):
>   net/bnxt: fetch SVIF information from the firmware
>   net/bnxt: fetch vnic info from DPDK port
>   net/bnxt: add devargs parameter for host memory based TRUFLOW feature
>   net/bnxt: add support for ULP session manager init
>   net/bnxt: add support for ULP session manager cleanup
>   net/bnxt: register tf rte flow ops
>   net/bnxt: disable vector mode when host based TRUFLOW is enabled
>   net/bnxt: add support for injecting mark into packet’s mbuf
>   net/bnxt: enable meson build on truflow code
> 

Hi Ajit,

If there will be a new version, I suggest the following commit titles; if
they make sense, can you update accordingly?

 net/bnxt: update HSI structure
 net/bnxt: update HWRM prep to use pointer
 net/bnxt: add TruFlow message handlers
 net/bnxt: add initial TruFlow core session open
 net/bnxt: add initial TruFlow core session close
 net/bnxt: add TruFlow core session SRAM
 net/bnxt: add initial TruFlow core resource management
 net/bnxt: add resource manager
 net/bnxt: add TruFlow core identifier
 net/bnxt: support TruFlow core TCAM
 net/bnxt: support TruFlow core table scope
 net/bnxt: support EM/EEM
 net/bnxt: fetch SVIF information from firmware
 net/bnxt: fetch VNIC info
 net/bnxt: support host memory based TruFlow
 net/bnxt: support ULP session manager init
 net/bnxt: support ULP session manager cleanup
 net/bnxt: add helper functions for blob/regfile ops
 net/bnxt: support process action tables
 net/bnxt: support process key tables
 net/bnxt: support freeing key and action tables
 net/bnxt: support alloc and program key and act tables
 net/bnxt: match flow API items with flow template patterns
 net/bnxt: match flow API actions with flow template actions
 net/bnxt: support flow API item parsing
 net/bnxt: support flow API action parsing
 net/bnxt: support flow API create
 net/bnxt: support flow API validate
 net/bnxt: support flow API destroy
 net/bnxt: support flow API flush
 net/bnxt: register TruFlow flow API ops
 net/bnxt: disable vector mode on host based TruFlow
 net/bnxt: support marking packet
 net/bnxt: enable meson build on TruFlow


^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 04/34] net/bnxt: add initial tf core session open
  2020-04-16 17:39         ` Ferruh Yigit
@ 2020-04-16 17:47           ` Ajit Khaparde
  0 siblings, 0 replies; 154+ messages in thread
From: Ajit Khaparde @ 2020-04-16 17:47 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Venkat Duvvuru, dpdk-dev, Michael Wildt

On Thu, Apr 16, 2020 at 10:40 AM Ferruh Yigit <ferruh.yigit@intel.com>
wrote:

> On 4/15/2020 9:18 AM, Venkat Duvvuru wrote:
> > From: Michael Wildt <michael.wildt@broadcom.com>
> >
> > - Add infrastructure support
> > - Add tf_core open session support
> >
> > Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
> > Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
> > Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
> > Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
>
> <...>
>
> > +int
> > +tfp_calloc(struct tfp_calloc_parms *parms)
> > +{
> > +     if (parms == NULL)
> > +             return -EINVAL;
> > +
> > +     parms->mem_va = rte_zmalloc("tf",
> > +                                 (parms->nitems * parms->size),
> > +                                 parms->alignment);
> > +     if (parms->mem_va == NULL) {
> > +             PMD_DRV_LOG(ERR, "Allocate failed mem_va\n");
> > +             return -ENOMEM;
> > +     }
> > +
> > +     parms->mem_pa = (void *)((uintptr_t)rte_mem_virt2iova(parms->mem_va));
> > +     if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
>
> This is causing a warning for 32-bit icc [1], because a 64-bit value is
> converted to a 32-bit pointer. Can you do the same casting that has been
> done one line above, as in [2].
>
> [1]
> .../drivers/net/bnxt/tf_core/tfp.c(110): warning #2259: non-pointer
> conversion from "rte_iova_t={uint64_t={__uint64_t={unsigned long long}}}"
> to "void *" may lose significant bits
>         if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
>                              ^
> [2]
>  -       if (parms->mem_pa == (void *)RTE_BAD_IOVA) {
>  +       if (parms->mem_pa == (void *)((uintptr_t)RTE_BAD_IOVA)) {
>
Makes sense. Thanks
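
For reference, a minimal sketch of tfp_calloc() with the suggested cast
applied. The struct layout, the PMD_DRV_LOG() wrapper, and the error
handling in the RTE_BAD_IOVA branch are assumed from the patch context
above, not taken verbatim from the merged code:

    #include <errno.h>
    #include <stdint.h>
    #include <rte_malloc.h>
    #include <rte_memory.h>

    int
    tfp_calloc(struct tfp_calloc_parms *parms)
    {
            if (parms == NULL)
                    return -EINVAL;

            parms->mem_va = rte_zmalloc("tf",
                                        (parms->nitems * parms->size),
                                        parms->alignment);
            if (parms->mem_va == NULL) {
                    PMD_DRV_LOG(ERR, "Allocate failed mem_va\n");
                    return -ENOMEM;
            }

            /* rte_mem_virt2iova() returns a 64-bit rte_iova_t; casting
             * RTE_BAD_IOVA through uintptr_t as well keeps the comparison
             * warning-free on 32-bit targets.
             */
            parms->mem_pa = (void *)((uintptr_t)rte_mem_virt2iova(parms->mem_va));
            if (parms->mem_pa == (void *)((uintptr_t)RTE_BAD_IOVA)) {
                    PMD_DRV_LOG(ERR, "Allocate failed mem_pa\n");
                    return -ENOMEM;
            }

            return 0;
    }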

^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 05/34] net/bnxt: add initial tf core session close support
  2020-04-16 17:39         ` Ferruh Yigit
@ 2020-04-16 17:48           ` Ajit Khaparde
  0 siblings, 0 replies; 154+ messages in thread
From: Ajit Khaparde @ 2020-04-16 17:48 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Venkat Duvvuru, dpdk-dev, Michael Wildt

On Thu, Apr 16, 2020 at 10:39 AM Ferruh Yigit <ferruh.yigit@intel.com>
wrote:

> On 4/15/2020 9:18 AM, Venkat Duvvuru wrote:
> > From: Michael Wildt <michael.wildt@broadcom.com>
> >
> > - Add TruFlow session and resource support functions
> > - Add TruFlow session close API and related message support functions
> >   for both session and hw resources
> >
> > Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
> > Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
> > Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
>
> <...>
>
> > +static inline uint32_t SWAP_WORDS32(uint32_t val32)
> > +{
> > +     return (((val32 & 0x0000ffff) << 16) |
> > +             ((val32 & 0xffff0000) >> 16));
> > +}
> > +
>
> 'SWAP_WORDS32()' is not used in this patch and causes a build warning
> [1]; can you please add this function in the patch where it is used?
>
> [1]
> .../drivers/net/bnxt/tf_core/tf_core.c:17:24: error: unused function
> 'SWAP_WORDS32' [-Werror,-Wunused-function]
> static inline uint32_t SWAP_WORDS32(uint32_t val32)
>                        ^
>
I can respin and send with this taken care of. Thanks
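
As an aside, the helper just swaps the 16-bit halves of a word. A
standalone sketch, with an illustrative main() that is not part of the
patch, shows what it computes and how the warning disappears once the
function has a caller in the same compilation unit:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Swap the two 16-bit halves of a 32-bit word. */
    static inline uint32_t SWAP_WORDS32(uint32_t val32)
    {
            return (((val32 & 0x0000ffff) << 16) |
                    ((val32 & 0xffff0000) >> 16));
    }

    int main(void)
    {
            /* prints 0x56781234 */
            printf("0x%08" PRIx32 "\n", SWAP_WORDS32(0x12345678));
            return 0;
    }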

^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management
  2020-04-16 17:40       ` Ferruh Yigit
@ 2020-04-16 17:51         ` Ajit Khaparde
  2020-04-17  8:37           ` Ferruh Yigit
  0 siblings, 1 reply; 154+ messages in thread
From: Ajit Khaparde @ 2020-04-16 17:51 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Venkat Duvvuru, dpdk-dev

On Thu, Apr 16, 2020 at 10:40 AM Ferruh Yigit <ferruh.yigit@intel.com>
wrote:

> On 4/15/2020 9:18 AM, Venkat Duvvuru wrote:
> > This patchset introduces a new mechanism to allow host-memory based
> > flow table management. This should allow higher flow scalability
> > than what is currently supported. This new approach also defines a
> > new rte_flow parser, and mapper which currently supports basic packet
> > classification in receive path. The patchset uses a newly implemented
> > control-plane firmware interface which optimizes flow insertions and
> > deletions.
> >
> > This is a baseline patchset with limited scale. Follow on patches will
> > add support for more protocol headers, rte_flow attributes, actions
> > and such.
> >
> > This is a tech preview feature, hence disabled by default and can be
> > enabled using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1".
> >
> > v3==>v4
> > =======
> > 1. Fixed some more compilation issues reported by CI
> >
> > Ajit Kumar Khaparde (1):
> >   net/bnxt: add updated dpdk hsi structure
> >
> > Farah Smith (2):
> >   net/bnxt: add tf core identifier support
> >   net/bnxt: add tf core table scope support
> >
> > Kishore Padmanabha (8):
> >   net/bnxt: match rte flow items with flow template patterns
> >   net/bnxt: match rte flow actions with flow template actions
> >   net/bnxt: add support for rte flow item parsing
> >   net/bnxt: add support for rte flow action parsing
> >   net/bnxt: add support for rte flow create driver hook
> >   net/bnxt: add support for rte flow validate driver hook
> >   net/bnxt: add support for rte flow destroy driver hook
> >   net/bnxt: add support for rte flow flush driver hook
> >
> > Michael Wildt (4):
> >   net/bnxt: add initial tf core session open
> >   net/bnxt: add initial tf core session close support
> >   net/bnxt: add tf core session sram functions
> >   net/bnxt: add resource manager functionality
> >
> > Mike Baucom (5):
> >   net/bnxt: add helper functions for blob/regfile ops
> >   net/bnxt: add support to process action tables
> >   net/bnxt: add support to process key tables
> >   net/bnxt: add support to free key and action tables
> >   net/bnxt: add support to alloc and program key and act tbls
> >
> > Pete Spreadborough (2):
> >   net/bnxt: add truflow message handlers
> >   net/bnxt: add EM/EEM functionality
> >
> > Randy Schacher (1):
> >   net/bnxt: update hwrm prep to use ptr
> >
> > Shahaji Bhosle (2):
> >   net/bnxt: add initial tf core resource mgmt support
> >   net/bnxt: add tf core TCAM support
> >
> > Venkat Duvvuru (9):
> >   net/bnxt: fetch SVIF information from the firmware
> >   net/bnxt: fetch vnic info from DPDK port
> >   net/bnxt: add devargs parameter for host memory based TRUFLOW feature
> >   net/bnxt: add support for ULP session manager init
> >   net/bnxt: add support for ULP session manager cleanup
> >   net/bnxt: register tf rte flow ops
> >   net/bnxt: disable vector mode when host based TRUFLOW is enabled
> >   net/bnxt: add support for injecting mark into packet’s mbuf
> >   net/bnxt: enable meson build on truflow code
> >
>
> Hi Ajit,
>
> If there will be a new version, I suggest the following commit titles; if
> they make sense, can you update them accordingly?
>
>  net/bnxt: update HSI structure
>  net/bnxt: update HWRM prep to use pointer
>  net/bnxt: add TruFlow message handlers
>  net/bnxt: add initial TruFlow core session open
>  net/bnxt: add initial TruFlow core session close
>  net/bnxt: add TruFlow core session SRAM
>  net/bnxt: add initial TruFlow core resource management
>  net/bnxt: add resource manager
>  net/bnxt: add TruFlow core identifier
>  net/bnxt: support TruFlow core TCAM
>  net/bnxt: support TruFlow core table scope
>  net/bnxt: support EM/EEM
>  net/bnxt: fetch SVIF information from firmware
>  net/bnxt: fetch VNIC info
>  net/bnxt: support host memory based TruFlow
>  net/bnxt: support ULP session manager init
>  net/bnxt: support ULP session manager cleanup
>  net/bnxt: add helper functions for blob/regfile ops
>  net/bnxt: support process action tables
>  net/bnxt: support process key tables
>  net/bnxt: support freeing key and action tables
>  net/bnxt: support alloc and program key and act tables
>  net/bnxt: match flow API items with flow template patterns
>  net/bnxt: match flow API actions with flow template actions
>  net/bnxt: support flow API item parsing
>  net/bnxt: support flow API action parsing
>  net/bnxt: support flow API create
>  net/bnxt: support flow API validate
>  net/bnxt: support flow API destroy
>  net/bnxt: support flow API flush
>  net/bnxt: register TruFlow flow API ops
>  net/bnxt: disable vector mode on host based TruFlow
>  net/bnxt: support marking packet
>  net/bnxt: enable meson build on TruFlow
>
Ferruh,
These look ok to me.
Do you want me to respin the set or can you handle it this time?
I am fine with both.

Thanks
Ajit

^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management
  2020-04-16 17:51         ` Ajit Khaparde
@ 2020-04-17  8:37           ` Ferruh Yigit
  2020-04-17 11:03             ` Ferruh Yigit
  0 siblings, 1 reply; 154+ messages in thread
From: Ferruh Yigit @ 2020-04-17  8:37 UTC (permalink / raw)
  To: Ajit Khaparde; +Cc: Venkat Duvvuru, dpdk-dev

On 4/16/2020 6:51 PM, Ajit Khaparde wrote:
> <...>
>
> Ferruh,
> These look ok to me.
> Do you want me to respin the set or can you handle it this time?
> I am fine with both.

Hi Ajit,

Let me check if I can fix the build errors quickly; if so, it is easier for me
to handle it myself. If I can't fix them, I will ping you.

^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management
  2020-04-17  8:37           ` Ferruh Yigit
@ 2020-04-17 11:03             ` Ferruh Yigit
  2020-04-17 16:14               ` Ajit Khaparde
  0 siblings, 1 reply; 154+ messages in thread
From: Ferruh Yigit @ 2020-04-17 11:03 UTC (permalink / raw)
  To: Ajit Khaparde; +Cc: Venkat Duvvuru, dpdk-dev

On 4/17/2020 9:37 AM, Ferruh Yigit wrote:
> On 4/16/2020 6:51 PM, Ajit Khaparde wrote:
>> <...>
>>
>> Ferruh,
>> These look ok to me.
>> Do you want me to respin the set or can you handle it this time?
>> I am fine with both.
> 
> Hi Ajit,
> 
> Let me check if I can fix the build errors quickly; if so, it is easier for
> me to handle it myself. If I can't fix them, I will ping you.
> 

Fixed and pulled.

^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management
  2020-04-17 11:03             ` Ferruh Yigit
@ 2020-04-17 16:14               ` Ajit Khaparde
  0 siblings, 0 replies; 154+ messages in thread
From: Ajit Khaparde @ 2020-04-17 16:14 UTC (permalink / raw)
  To: Ferruh Yigit; +Cc: Venkat Duvvuru, dpdk-dev

On Fri, Apr 17, 2020 at 4:03 AM Ferruh Yigit <ferruh.yigit@intel.com> wrote:

> On 4/17/2020 9:37 AM, Ferruh Yigit wrote:
> > On 4/16/2020 6:51 PM, Ajit Khaparde wrote:
> >> <...>
> >>
> >> Ferruh,
> >> These look ok to me.
> >> Do you want me to respin the set or can you handle it this time?
> >> I am fine with both.
> >
> > Hi Ajit,
> >
> > Let me check if I can fix the build errors quickly; if so, it is easier
> > for me to handle it myself. If I can't fix them, I will ping you.
> >
>
> Fixed and pulled.
>
Thanks Ferruh.

^ permalink raw reply	[flat|nested] 154+ messages in thread

* Re: [dpdk-dev] [PATCH v4 34/34] net/bnxt: enable meson build on truflow code
  2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 34/34] net/bnxt: enable meson build on truflow code Venkat Duvvuru
@ 2020-04-22 21:27         ` Thomas Monjalon
  0 siblings, 0 replies; 154+ messages in thread
From: Thomas Monjalon @ 2020-04-22 21:27 UTC (permalink / raw)
  To: Venkat Duvvuru; +Cc: dev, ajit.khaparde

15/04/2020 10:19, Venkat Duvvuru:
> Include tf_ulp & tf_core directories and the files inside them.

This commit is nonsense.
You should compile files while adding them.
It is not a lot more expensive to prepare proper patches.
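
For illustration, the kind of per-patch change being asked for, shown as a
hypothetical excerpt of drivers/net/bnxt/meson.build (the exact file list
is assumed, not quoted from the series):

    # Each patch that introduces a source file also appends it to the
    # build, so the tree compiles after every commit in the series.
    sources += files(
            'tf_core/tf_core.c',
            'tf_core/tfp.c',
    )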




^ permalink raw reply	[flat|nested] 154+ messages in thread

end of thread

Thread overview: 154+ messages
2020-03-17 15:37 [dpdk-dev] [PATCH 00/33] add support for host based flow table management Venkat Duvvuru
2020-03-17 15:37 ` [dpdk-dev] [PATCH 01/33] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 02/33] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 03/33] net/bnxt: add truflow message handlers Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 04/33] net/bnxt: add initial tf core session open Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 05/33] net/bnxt: add initial tf core session close support Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 06/33] net/bnxt: add tf core session sram functions Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 07/33] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 08/33] net/bnxt: add resource manager functionality Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 09/33] net/bnxt: add tf core identifier support Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 10/33] net/bnxt: add tf core TCAM support Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 11/33] net/bnxt: add tf core table scope support Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 12/33] net/bnxt: add EM/EEM functionality Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 13/33] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 14/33] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 15/33] net/bnxt: add support for ULP session manager init Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 16/33] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 17/33] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 18/33] net/bnxt: add support to process action tables Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 19/33] net/bnxt: add support to process key tables Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 20/33] net/bnxt: add support to free key and action tables Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 21/33] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 22/33] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 23/33] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 24/33] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 25/33] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 26/33] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 27/33] net/bnxt: add support for rte flow validate " Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 28/33] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 29/33] net/bnxt: add support for rte flow flush " Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 30/33] net/bnxt: register tf rte flow ops Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 31/33] net/bnxt: disable vector mode when BNXT TRUFLOW is enabled Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 32/33] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
2020-03-17 15:38 ` [dpdk-dev] [PATCH 33/33] config: introduce BNXT TRUFLOW config flag Venkat Duvvuru
2020-04-13 19:39 ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 01/34] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 02/34] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 03/34] net/bnxt: add truflow message handlers Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 04/34] net/bnxt: add initial tf core session open Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 05/34] net/bnxt: add initial tf core session close support Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 06/34] net/bnxt: add tf core session sram functions Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 07/34] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 08/34] net/bnxt: add resource manager functionality Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 09/34] net/bnxt: add tf core identifier support Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 10/34] net/bnxt: add tf core TCAM support Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 11/34] net/bnxt: add tf core table scope support Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 12/34] net/bnxt: add EM/EEM functionality Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 13/34] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 14/34] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 16/34] net/bnxt: add support for ULP session manager init Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 17/34] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 18/34] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 19/34] net/bnxt: add support to process action tables Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 20/34] net/bnxt: add support to process key tables Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 21/34] net/bnxt: add support to free key and action tables Venkat Duvvuru
2020-04-13 19:39   ` [dpdk-dev] [PATCH v2 22/34] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 23/34] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 24/34] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 25/34] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 26/34] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 27/34] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 28/34] net/bnxt: add support for rte flow validate " Venkat Duvvuru
2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 29/34] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 30/34] net/bnxt: add support for rte flow flush " Venkat Duvvuru
2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 31/34] net/bnxt: register tf rte flow ops Venkat Duvvuru
2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled Venkat Duvvuru
2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 33/34] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
2020-04-13 19:40   ` [dpdk-dev] [PATCH v2 34/34] net/bnxt: enable meson build on truflow code Venkat Duvvuru
2020-04-13 21:35   ` [dpdk-dev] [PATCH v2 00/34] add support for host based flow table management Thomas Monjalon
2020-04-15  8:56     ` Venkat Duvvuru
2020-04-14  8:12   ` [dpdk-dev] [PATCH v3 " Venkat Duvvuru
2020-04-14  8:12     ` [dpdk-dev] [PATCH v3 01/34] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
2020-04-14  8:12     ` [dpdk-dev] [PATCH v3 02/34] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 03/34] net/bnxt: add truflow message handlers Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 04/34] net/bnxt: add initial tf core session open Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 05/34] net/bnxt: add initial tf core session close support Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 06/34] net/bnxt: add tf core session sram functions Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 07/34] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 08/34] net/bnxt: add resource manager functionality Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 09/34] net/bnxt: add tf core identifier support Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 10/34] net/bnxt: add tf core TCAM support Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 11/34] net/bnxt: add tf core table scope support Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 12/34] net/bnxt: add EM/EEM functionality Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 13/34] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 14/34] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 16/34] net/bnxt: add support for ULP session manager init Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 17/34] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 18/34] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 19/34] net/bnxt: add support to process action tables Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 20/34] net/bnxt: add support to process key tables Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 21/34] net/bnxt: add support to free key and action tables Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 22/34] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 23/34] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 24/34] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 25/34] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 26/34] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 27/34] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 28/34] net/bnxt: add support for rte flow validate " Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 29/34] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 30/34] net/bnxt: add support for rte flow flush " Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 31/34] net/bnxt: register tf rte flow ops Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 33/34] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
2020-04-14  8:13     ` [dpdk-dev] [PATCH v3 34/34] net/bnxt: enable meson build on truflow code Venkat Duvvuru
2020-04-15  8:18     ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 01/34] net/bnxt: add updated dpdk hsi structure Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 02/34] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 03/34] net/bnxt: add truflow message handlers Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 04/34] net/bnxt: add initial tf core session open Venkat Duvvuru
2020-04-16 17:39         ` Ferruh Yigit
2020-04-16 17:47           ` Ajit Khaparde
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 05/34] net/bnxt: add initial tf core session close support Venkat Duvvuru
2020-04-16 17:39         ` Ferruh Yigit
2020-04-16 17:48           ` Ajit Khaparde
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 06/34] net/bnxt: add tf core session sram functions Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 07/34] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 08/34] net/bnxt: add resource manager functionality Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 09/34] net/bnxt: add tf core identifier support Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 10/34] net/bnxt: add tf core TCAM support Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 11/34] net/bnxt: add tf core table scope support Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 12/34] net/bnxt: add EM/EEM functionality Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 13/34] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 14/34] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 16/34] net/bnxt: add support for ULP session manager init Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 17/34] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 18/34] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 19/34] net/bnxt: add support to process action tables Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 20/34] net/bnxt: add support to process key tables Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 21/34] net/bnxt: add support to free key and action tables Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 22/34] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 23/34] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 24/34] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 25/34] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 26/34] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 27/34] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 28/34] net/bnxt: add support for rte flow validate " Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 29/34] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 30/34] net/bnxt: add support for rte flow flush " Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 31/34] net/bnxt: register tf rte flow ops Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 33/34] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 34/34] net/bnxt: enable meson build on truflow code Venkat Duvvuru
2020-04-22 21:27         ` Thomas Monjalon
2020-04-15 15:29       ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Ajit Khaparde
2020-04-16 16:23       ` Ferruh Yigit
2020-04-16 16:38         ` Ajit Khaparde
2020-04-16 17:40       ` Ferruh Yigit
2020-04-16 17:51         ` Ajit Khaparde
2020-04-17  8:37           ` Ferruh Yigit
2020-04-17 11:03             ` Ferruh Yigit
2020-04-17 16:14               ` Ajit Khaparde
