From: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
To: dev@dpdk.org
Cc: Pete Spreadborough <peter.spreadborough@broadcom.com>
Subject: [dpdk-dev] [PATCH 12/33] net/bnxt: add EM/EEM functionality
Date: Tue, 17 Mar 2020 21:08:10 +0530
Message-ID: <1584459511-5353-13-git-send-email-venkatkumar.duvvuru@broadcom.com>
In-Reply-To: <1584459511-5353-1-git-send-email-venkatkumar.duvvuru@broadcom.com>

From: Pete Spreadborough <peter.spreadborough@broadcom.com>

- Add TruFlow flow memory support
- Exact Match (EM) adds the capability to manage and manipulate
  data flows using on-chip memory.
- Extended Exact Match (EEM) behaves like EM, but at a vastly
  increased scale by using host DDR, with a performance tradeoff
  due to the need to access off-chip memory.

Signed-off-by: Pete Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/Makefile                     |    2 +
 drivers/net/bnxt/tf_core/lookup3.h            |  161 +++
 drivers/net/bnxt/tf_core/stack.c              |  107 ++
 drivers/net/bnxt/tf_core/stack.h              |  107 ++
 drivers/net/bnxt/tf_core/tf_core.c            |   51 +
 drivers/net/bnxt/tf_core/tf_core.h            |  480 ++++++-
 drivers/net/bnxt/tf_core/tf_em.c              |  516 +++++++
 drivers/net/bnxt/tf_core/tf_em.h              |  117 ++
 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h |  166 +++
 drivers/net/bnxt/tf_core/tf_msg.c             |  171 +++
 drivers/net/bnxt/tf_core/tf_msg.h             |   40 +
 drivers/net/bnxt/tf_core/tf_tbl.c             | 1795 ++++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_tbl.h             |   83 ++
 13 files changed, 3789 insertions(+), 7 deletions(-)
 create mode 100644 drivers/net/bnxt/tf_core/lookup3.h
 create mode 100644 drivers/net/bnxt/tf_core/stack.c
 create mode 100644 drivers/net/bnxt/tf_core/stack.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.c
 create mode 100644 drivers/net/bnxt/tf_core/tf_em.h
 create mode 100644 drivers/net/bnxt/tf_core/tf_ext_flow_handle.h

diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile
index b97abb6..c950c6d 100644
--- a/drivers/net/bnxt/Makefile
+++ b/drivers/net/bnxt/Makefile
@@ -51,6 +51,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_core.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/bitalloc.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_msg.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/rand.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_em.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_rm.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tf_tbl.c
 SRCS-$(CONFIG_RTE_LIBRTE_BNXT_TRUFLOW) += tf_core/tfp.c
diff --git a/drivers/net/bnxt/tf_core/lookup3.h b/drivers/net/bnxt/tf_core/lookup3.h
new file mode 100644
index 0000000..b1fd2cd
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/lookup3.h
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Based on lookup3.c, by Bob Jenkins, May 2006, Public Domain.
+ * http://www.burtleburtle.net/bob/c/lookup3.c
+ *
+ * These are functions for producing 32-bit hashes for hash table lookup.
+ * hashword(), hashlittle(), hashlittle2(), hashbig(), mix(), and final()
+ * are externally useful functions. Routines to test the hash are included
+ * if SELF_TEST is defined. You can use this free for any purpose. It is in
+ * the public domain. It has no warranty.
+ */
+
+#ifndef _LOOKUP3_H_
+#define _LOOKUP3_H_
+
+#define rot(x, k) (((x) << (k)) | ((x) >> (32 - (k))))
+
+/** -------------------------------------------------------------------------
+ * This is reversible, so any information in (a,b,c) before mix() is
+ * still in (a,b,c) after mix().
+ *
+ * If four pairs of (a,b,c) inputs are run through mix(), or through
+ * mix() in reverse, there are at least 32 bits of the output that
+ * are sometimes the same for one pair and different for another pair.
+ * This was tested for:
+ *   pairs that differed by one bit, by two bits, in any combination
+ *   of top bits of (a,b,c), or in any combination of bottom bits of
+ *   (a,b,c).
+ *   "differ" is defined as +, -, ^, or ~^.  For + and -, I transformed
+ *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ *   is commonly produced by subtraction) look like a single 1-bit
+ *   difference.
+ *   the base values were pseudorandom, all zero but one bit set, or
+ *   all zero plus a counter that starts at zero.
+ *
+ * Some k values for my "a-=c; a^=rot(c,k); c+=b;" arrangement that
+ * satisfy this are
+ *     4  6  8 16 19  4
+ *     9 15  3 18 27 15
+ *    14  9  3  7 17  3
+ * Well, "9 15 3 18 27 15" didn't quite get 32 bits diffing
+ * for "differ" defined as + with a one-bit base and a two-bit delta.  I
+ * used http://burtleburtle.net/bob/hash/avalanche.html to choose
+ * the operations, constants, and arrangements of the variables.
+ *
+ * This does not achieve avalanche.  There are input bits of (a,b,c)
+ * that fail to affect some output bits of (a,b,c), especially of a.  The
+ * most thoroughly mixed value is c, but it doesn't really even achieve
+ * avalanche in c.
+ *
+ * This allows some parallelism.  Read-after-writes are good at doubling
+ * the number of bits affected, so the goal of mixing pulls in the opposite
+ * direction as the goal of parallelism.  I did what I could.  Rotates
+ * seem to cost as much as shifts on every machine I could lay my hands
+ * on, and rotates are much kinder to the top and bottom bits, so I used
+ * rotates.
+ * --------------------------------------------------------------------------
+ */
+#define mix(a, b, c) \
+{ \
+	(a) -= (c); (a) ^= rot((c), 4);  (c) += b; \
+	(b) -= (a); (b) ^= rot((a), 6);  (a) += c; \
+	(c) -= (b); (c) ^= rot((b), 8);  (b) += a; \
+	(a) -= (c); (a) ^= rot((c), 16); (c) += b; \
+	(b) -= (a); (b) ^= rot((a), 19); (a) += c; \
+	(c) -= (b); (c) ^= rot((b), 4);  (b) += a; \
+}
+
+/** --------------------------------------------------------------------------
+ * final -- final mixing of 3 32-bit values (a,b,c) into c
+ *
+ * Pairs of (a,b,c) values differing in only a few bits will usually
+ * produce values of c that look totally different.  This was tested for
+ *  pairs that differed by one bit, by two bits, in any combination
+ *   of top bits of (a,b,c), or in any combination of bottom bits of
+ *   (a,b,c).
+ *   "differ" is defined as +, -, ^, or ~^.  For + and -, I transformed
+ *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
+ *   is commonly produced by subtraction) look like a single 1-bit
+ *   difference.
+ *   the base values were pseudorandom, all zero but one bit set, or
+ *   all zero plus a counter that starts at zero.
+ *
+ * These constants passed:
+ *  14 11 25 16 4 14 24
+ *  12 14 25 16 4 14 24
+ * and these came close:
+ *   4  8 15 26 3 22 24
+ *  10  8 15 26 3 22 24
+ *  11  8 15 26 3 22 24
+ * --------------------------------------------------------------------------
+ */
+#define final(a, b, c) \
+{ \
+	(c) ^= (b); (c) -= rot((b), 14); \
+	(a) ^= (c); (a) -= rot((c), 11); \
+	(b) ^= (a); (b) -= rot((a), 25); \
+	(c) ^= (b); (c) -= rot((b), 16); \
+	(a) ^= (c); (a) -= rot((c), 4);  \
+	(b) ^= (a); (b) -= rot((a), 14); \
+	(c) ^= (b); (c) -= rot((b), 24); \
+}
+
+/** --------------------------------------------------------------------
+ *  This works on all machines.  To be useful, it requires
+ *  -- that the key be an array of uint32_t's, and
+ *  -- that the length be the number of uint32_t's in the key
+ *
+ *  The function hashword() is identical to hashlittle() on little-endian
+ *  machines, and identical to hashbig() on big-endian machines,
+ *  except that the length has to be measured in uint32_ts rather than in
+ *  bytes. hashlittle() is more complicated than hashword() only because
+ *  hashlittle() has to dance around fitting the key bytes into registers.
+ *
+ *  Input Parameters:
+ *	 key: an array of uint32_t values
+ *	 length: the length of the key, in uint32_ts
+ *	 initval: the previous hash, or an arbitrary value
+ * --------------------------------------------------------------------
+ */
+static inline uint32_t hashword(const uint32_t *k,
+				size_t length,
+				uint32_t initval) {
+	uint32_t a, b, c;
+	int index = 12; /* last word of the 13-word (52-byte) key */
+
+	/* Set up the internal state */
+	a = 0xdeadbeef + (((uint32_t)length) << 2) + initval;
+	b = a;
+	c = a;
+
+	/*-------------------------------------------- handle most of the key */
+	while (length > 3) {
+		a += k[index];
+		b += k[index - 1];
+		c += k[index - 2];
+		mix(a, b, c);
+		length -= 3;
+		index -= 3;
+	}
+
+	/*-------------------------------------- handle the last 3 uint32_t's */
+	switch (length) {	      /* all the case statements fall through */
+	case 3:
+		c += k[index - 2];
+		/* Falls through. */
+	case 2:
+		b += k[index - 1];
+		/* Falls through. */
+	case 1:
+		a += k[index];
+		final(a, b, c);
+		/* Falls through. */
+	case 0:	    /* case 0: nothing left to add */
+		break;
+	}
+	/*------------------------------------------------- report the result */
+	return c;
+}
+
+#endif /* _LOOKUP3_H_ */
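
A minimal usage sketch, not part of the patch: hashword() above indexes from
word 12 downward, so the caller must always supply a buffer spanning 13
uint32_t words (52 bytes, the maximum EM key size used later in this series);
the seed below is illustrative only.

    #include <stdint.h>
    #include "lookup3.h"

    uint32_t hash_em_key(const uint32_t key[13])
    {
        /* 13 words = 52 bytes; hashword() walks k[12] down to k[0],
         * so shorter lengths still require the full 13-word buffer.
         */
        uint32_t seed = 0x12345678; /* arbitrary initval */

        return hashword(key, 13, seed);
    }
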
diff --git a/drivers/net/bnxt/tf_core/stack.c b/drivers/net/bnxt/tf_core/stack.c
new file mode 100644
index 0000000..3337073
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/stack.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <errno.h>
+#include "stack.h"
+
+#define STACK_EMPTY -1
+
+/* Initialize stack
+ */
+int
+stack_init(int num_entries, uint32_t *items, struct stack *st)
+{
+	if (items == NULL || st == NULL)
+		return -EINVAL;
+
+	st->max = num_entries;
+	st->top = STACK_EMPTY;
+	st->items = items;
+
+	return 0;
+}
+
+/* Return the size of the stack
+ */
+int32_t
+stack_size(struct stack *st)
+{
+	return st->top + 1;
+}
+
+/* Check if the stack is empty
+ */
+bool
+stack_is_empty(struct stack *st)
+{
+	return st->top == STACK_EMPTY;
+}
+
+/* Check if the stack is full
+ */
+bool
+stack_is_full(struct stack *st)
+{
+	return st->top == st->max - 1;
+}
+
+/* Add element x to the stack
+ */
+int
+stack_push(struct stack *st, uint32_t x)
+{
+	if (stack_is_full(st))
+		return -EOVERFLOW;
+
+	/* increment the top index and add the element
+	 */
+	st->items[++st->top] = x;
+
+	return 0;
+}
+
+/* Pop top element x from the stack and return
+ * in user provided location.
+ */
+int
+stack_pop(struct stack *st, uint32_t *x)
+{
+	if (stack_is_empty(st))
+		return -ENODATA;
+
+	*x = st->items[st->top];
+	st->top--;
+
+	return 0;
+}
+
+/* Dump the stack
+ */
+void stack_dump(struct stack *st)
+{
+	int i, j;
+
+	printf("top=%d\n", st->top);
+	printf("max=%d\n", st->max);
+
+	if (st->top == -1) {
+		printf("stack is empty\n");
+		return;
+	}
+
+	/* 8 items are printed per line; i tracks the item index */
+	for (i = 0; i < st->max; i++) {
+		printf("item[%d] 0x%08x", i, st->items[i]);
+
+		for (j = 0; j < 7; j++) {
+			if (i++ < st->max - 1)
+				printf(" 0x%08x", st->items[i]);
+		}
+		printf("\n");
+	}
+}
diff --git a/drivers/net/bnxt/tf_core/stack.h b/drivers/net/bnxt/tf_core/stack.h
new file mode 100644
index 0000000..6fe8829
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/stack.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+#ifndef _STACK_H_
+#define _STACK_H_
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdbool.h>
+#include <stdint.h>
+
+/** Stack data structure
+ */
+struct stack {
+	int max;         /**< Maximum number of entries */
+	int top;         /**< Index of the top entry, -1 when empty */
+	uint32_t *items; /**< items in the stack */
+};
+
+/** Initialize stack of uint32_t elements
+ *
+ *  [in] num_entries
+ *    maximum number of elements in the stack
+ *
+ *  [in] items
+ *    pointer to the item storage (must hold num_entries uint32_t values)
+ *
+ *  [in] st
+ *    pointer to the stack structure
+ *
+ *  return
+ *    0 for success
+ */
+int stack_init(int num_entries,
+	       uint32_t *items,
+	       struct stack *st);
+
+/** Return the size of the stack
+ *
+ *  [in] st
+ *    pointer to the stack
+ *
+ *  return
+ *    number of elements
+ */
+int32_t stack_size(struct stack *st);
+
+/** Check if the stack is empty
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *   true or false
+ */
+bool stack_is_empty(struct stack *st);
+
+/** Check if the stack is full
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *   true or false
+ */
+bool stack_is_full(struct stack *st);
+
+/** Add element x to the stack
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * [in] x
+ *   value to push on the stack
+ * return
+ *  0 for success
+ */
+int stack_push(struct stack *st, uint32_t x);
+
+/** Pop top element x from the stack and return
+ * in user provided location.
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * [out] x
+ *  pointer to where the value popped will be written
+ *
+ * return
+ *  0 for success
+ */
+int stack_pop(struct stack *st, uint32_t *x);
+
+/** Dump stack information
+ *
+ * Warning: Not intended for large stacks; every entry is printed
+ *
+ * [in] st
+ *   pointer to the stack
+ *
+ * return
+ *    none
+ */
+void stack_dump(struct stack *st);
+
+#endif /* _STACK_H_ */
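
A minimal usage sketch of this API, not part of the patch; the backing array
and the pushed values are illustrative. The stack is LIFO and all item
storage is caller-owned:

    #include <stdint.h>
    #include "stack.h"

    int stack_example(void)
    {
        static uint32_t items[32]; /* caller-owned item storage */
        struct stack free_list;
        uint32_t idx;
        int rc;

        rc = stack_init(32, items, &free_list);
        if (rc)
            return rc;

        stack_push(&free_list, 7);
        stack_push(&free_list, 8);

        rc = stack_pop(&free_list, &idx); /* idx == 8, LIFO order */
        return rc;
    }
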
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index 2833de2..8f037a2 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -8,6 +8,7 @@
 #include "tf_core.h"
 #include "tf_session.h"
 #include "tf_tbl.h"
+#include "tf_em.h"
 #include "tf_rm.h"
 #include "tf_msg.h"
 #include "tfp.h"
@@ -288,6 +289,56 @@ tf_close_session(struct tf *tfp)
 	return rc_close;
 }
 
+/** Insert EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_insert_em_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb     *tbl_scope_cb;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb = tbl_scope_cb_find(
+		(struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	/* Process the EM entry per Table Scope type */
+	return tf_insert_eem_entry(
+		(struct tf_session *)(tfp->session->core_data),
+		tbl_scope_cb,
+		parms);
+}
+
+/** Delete EM hash entry API
+ *
+ *    returns:
+ *    0       - Success
+ *    -EINVAL - Error
+ */
+int tf_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_tbl_scope_cb     *tbl_scope_cb;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb = tbl_scope_cb_find(
+		(struct tf_session *)(tfp->session->core_data),
+		parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	return tf_delete_eem_entry(tfp, parms);
+}
+
 /** allocate identifier resource
  *
  * Returns success or failure code.
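
For context, a sketch (not part of the patch) of how an application might
drive the new insert API for an external (EEM) entry; the session open and
table scope allocation are assumed to have happened elsewhere, the key and
record contents are placeholders, and TF_HW_EM_KEY_MAX_SIZE comes from
tf_em.h later in this patch:

    static int em_insert_example(struct tf *tfp, uint32_t tbl_scope_id)
    {
        struct tf_insert_em_entry_parms parms = { 0 };
        uint8_t key[TF_HW_EM_KEY_MAX_SIZE + 4] = { 0 }; /* placeholder key */
        uint8_t record[16] = { 0 };                  /* placeholder record */

        parms.dir = TF_DIR_RX;
        parms.mem = TF_MEM_EXTERNAL;
        parms.tbl_scope_id = tbl_scope_id; /* from tf_alloc_tbl_scope() */
        parms.key = key;
        parms.key_sz_in_bits = TF_HW_EM_KEY_MAX_SIZE * 8;
        parms.em_record = record;
        parms.dup_check = 1;

        /* On success, parms.flow_handle identifies the entry for a later
         * tf_delete_em_entry() and parms.flow_id carries the GFID.
         */
        return tf_insert_em_entry(tfp, &parms);
    }
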
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 4c90677..34e643c 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -21,6 +21,10 @@
 
 /********** BEGIN Truflow Core DEFINITIONS **********/
 
+
+#define TF_KILOBYTE  1024
+#define TF_MEGABYTE  (1024 * 1024)
+
 /**
  * direction
  */
@@ -31,6 +35,27 @@ enum tf_dir {
 };
 
 /**
+ * memory choice
+ */
+enum tf_mem {
+	TF_MEM_INTERNAL, /**< Internal */
+	TF_MEM_EXTERNAL, /**< External */
+	TF_MEM_MAX
+};
+
+/**
+ * The size of the external action record (Wh+/Brd2)
+ *
+ * Currently set to 512.
+ *
+ * AR (16B) + encap (256B) + stats_ptrs (8B) + resvd (8B)
+ * + stats (16B) = 304B, aligned on a 16B boundary
+ *
+ * Theoretically the size could be as small as ~304B, but it is
+ * rounded up to 512.
+ */
+#define TF_ACTION_RECORD_SZ 512
+
+/**
  * External pool size
  *
  * Defines a single pool of external action records of
@@ -56,6 +81,23 @@ enum tf_dir {
 #define TF_EXT_POOL_0      0 /**< matches TF_TBL_TYPE_EXT   */
 #define TF_EXT_POOL_1      1 /**< matches TF_TBL_TYPE_EXT_0 */
 
+/** EEM record AR helper
+ *
+ * Helpers to handle the Action Record Pointer in the EEM Record Entry.
+ *
+ * Convert absolute offset to action record pointer in EEM record entry
+ * Convert action record pointer in EEM record entry to absolute offset
+ */
+#define TF_ACT_REC_OFFSET_2_PTR(offset) ((offset) >> 4)
+#define TF_ACT_REC_PTR_2_OFFSET(offset) ((offset) << 4)
+
+#define TF_ACT_REC_INDEX_2_OFFSET(idx) ((idx) << 9)
+
+/*
+ * Helper Macros
+ */
+#define TF_BITS_2_BYTES(num_bits) (((num_bits) + 7) / 8)
+
 /********** BEGIN API FUNCTION PROTOTYPES/PARAMETERS **********/
 
 /**
@@ -495,7 +537,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t rx_num_flows_in_k;
 	/**
-	 * [in] SR2 only receive table access interface id
+	 * [in] Brd4 only receive table access interface id
 	 */
 	uint32_t rx_tbl_if_id;
 	/**
@@ -517,7 +559,7 @@ struct tf_alloc_tbl_scope_parms {
 	 */
 	uint32_t tx_num_flows_in_k;
 	/**
-	 * [in] SR2 only receive table access interface id
+	 * [in] Brd4 only transmit table access interface id
 	 */
 	uint32_t tx_tbl_if_id;
 	/**
@@ -536,7 +578,7 @@ struct tf_free_tbl_scope_parms {
 /**
  * allocate a table scope
  *
- * On SR2 Firmware will allocate a scope ID.  On other devices, the scope
+ * On Brd4, firmware will allocate a scope ID.  On other devices, the scope
  * is a software construct to identify an EEM table.  This function will
 * divide the hash memory/buckets and records according to the
 * device constraints based upon calculations using either the number of flows
@@ -546,7 +588,7 @@ struct tf_free_tbl_scope_parms {
  *
  * This API will allocate the table region in
  * DRAM, program the PTU page table entries, and program the number of static
- * buckets (if SR2) in the RX and TX CFAs.  Buckets are assumed to start at
+ * buckets (if Brd4) in the RX and TX CFAs.  Buckets are assumed to start at
  * 0 in the EM memory for the scope.  Upon successful completion of this API,
  * hash tables are fully initialized and ready for entries to be inserted.
  *
@@ -563,7 +605,7 @@ struct tf_free_tbl_scope_parms {
  * memory allocated based on the rx_em_hash_mb/tx_em_hash_mb parameters.  The
  * hash table buckets are stored at the beginning of that memory.
  *
- * NOTES:  No EM internal setup is done here. On chip EM records are managed
+ * NOTE:  No EM internal setup is done here. On chip EM records are managed
  * internally by TruFlow core.
  *
  * Returns success or failure code.
@@ -577,7 +619,7 @@ int tf_alloc_tbl_scope(struct tf *tfp,
  *
  * Firmware checks that the table scope ID is owned by the TruFlow
  * session, verifies that no references to this table scope remains
- * (SR2 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
+ * (Brd4 ILT) or Profile TCAM entries for either CFA (RX/TX) direction,
  * then frees the table scope ID.
  *
  * Returns success or failure code.
@@ -905,4 +947,430 @@ enum tf_tbl_type {
 	TF_TBL_TYPE_EXT_0,
 	TF_TBL_TYPE_MAX
 };
+
+/** tf_alloc_tbl_entry parameter definition
+ */
+struct tf_alloc_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Enable search for matching entry. If the table type is
+	 * internal the shadow copy will be searched before
+	 * alloc. Session must be configured with shadow copy enabled.
+	 */
+	uint8_t search_enable;
+	/**
+	 * [in] Result data to search for (if search_enable)
+	 */
+	uint8_t *result;
+	/**
+	 * [in] Result data size in bytes (if search_enable)
+	 */
+	uint16_t result_sz_in_bytes;
+	/**
+	 * [out] If search_enable, set if matching entry found
+	 */
+	uint8_t hit;
+	/**
+	 * [out] Current ref count after allocation (if search_enable)
+	 */
+	uint16_t ref_cnt;
+	/**
+	 * [out] Idx of allocated entry or found entry (if search_enable)
+	 */
+	uint32_t idx;
+};
+
+/** allocate index table entries
+ *
+ * Internal types:
+ *
+ * Allocate an on chip index table entry or search for a matching
+ * entry of the indicated type for this TruFlow session.
+ *
+ * Allocates an index table record. This function will attempt to
+ * allocate an entry or search an index table for a matching entry if
+ * search is enabled (only the shadow copy of the table is accessed).
+ *
+ * If search is not enabled, the first available free entry is
+ * returned. If search is enabled and a matching entry to entry_data
+ * is found hit is set to TRUE and success is returned.
+ *
+ * External types:
+ *
+ * These are used to allocate inlined action record memory.
+ *
+ * Allocates an external index table action record.
+ *
+ * NOTE:
+ * Implementation of the internals of this function will be a stack with push
+ * and pop.
+ *
+ * Returns success or failure code.
+ */
+int tf_alloc_tbl_entry(struct tf *tfp,
+		       struct tf_alloc_tbl_entry_parms *parms);
+
+/** tf_free_tbl_entry parameter definition
+ */
+struct tf_free_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of the allocation
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Index to free
+	 */
+	uint32_t idx;
+	/**
+	 * [out] Reference count after free, only valid if session has been
+	 * created with shadow_copy.
+	 */
+	uint16_t ref_cnt;
+};
+
+/** free index table entry
+ *
+ * Used to free a previously allocated table entry.
+ *
+ * Internal types:
+ *
+ * If session has shadow_copy enabled the shadow DB is searched and if
+ * found the element ref_cnt is decremented. If ref_cnt goes to
+ * zero then the element is returned to the session pool.
+ *
+ * If the session does not have a shadow DB the element is freed and
+ * given back to the session pool.
+ *
+ * External types:
+ *
+ * Frees an external index table action record.
+ *
+ * NOTE:
+ * Implementation of the internals of this function will be a stack with push
+ * and pop.
+ *
+ * Returns success or failure code.
+ */
+int tf_free_tbl_entry(struct tf *tfp,
+		      struct tf_free_tbl_entry_parms *parms);
+
+/** tf_set_tbl_entry parameter definition
+ */
+struct tf_set_tbl_entry_parms {
+	/**
+	 * [in] Table scope identifier
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to set
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [in] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [in] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to write to
+	 */
+	uint32_t idx;
+};
+
+/** set index table entry
+ *
+ * Used to insert an application programmed index table entry into a
+ * previous allocated table location.  A shadow copy of the table
+ * is maintained (if enabled) (only for internal objects)
+ *
+ * Returns success or failure code.
+ */
+int tf_set_tbl_entry(struct tf *tfp,
+		     struct tf_set_tbl_entry_parms *parms);
+
+/** tf_get_tbl_entry parameter definition
+ */
+struct tf_get_tbl_entry_parms {
+	/**
+	 * [in] Receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] Type of object to get
+	 */
+	enum tf_tbl_type type;
+	/**
+	 * [out] Entry data
+	 */
+	uint8_t *data;
+	/**
+	 * [out] Entry size
+	 */
+	uint16_t data_sz_in_bytes;
+	/**
+	 * [in] Entry index to read
+	 */
+	uint32_t idx;
+};
+
+/** get index table entry
+ *
+ * Used to retrieve a previous set index table entry.
+ *
+ * Reads and compares with the shadow table copy (if enabled) (only
+ * for internal objects).
+ *
+ * Returns success or failure code. Failure will be returned if the
+ * provided data buffer is too small for the data type requested.
+ */
+int tf_get_tbl_entry(struct tf *tfp,
+		     struct tf_get_tbl_entry_parms *parms);
+
+/**
+ * @page exact_match Exact Match Table
+ *
+ * @ref tf_insert_em_entry
+ *
+ * @ref tf_delete_em_entry
+ *
+ * @ref tf_search_em_entry
+ *
+ */
+/** tf_insert_em_entry parameter definition
+ */
+struct tf_insert_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] ptr to structure containing key fields
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key bit length
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in] ptr to structure containing result field
+	 */
+	uint8_t *em_record;
+	/**
+	 * [out] result size in bits
+	 */
+	uint16_t em_record_sz_in_bits;
+	/**
+	 * [in] duplicate check flag
+	 */
+	uint8_t	dup_check;
+	/**
+	 * [out] Flow handle value for the inserted entry.  This is encoded
+	 * as the entries[4]:bucket[2]:hashId[1]:hash[14]
+	 */
+	uint64_t flow_handle;
+	/**
+	 * [out] Flow id is returned as null (internal)
+	 * Flow id is the GFID value for the inserted entry (external)
+	 * This is the value written to the BD and useful information for mark.
+	 */
+	uint64_t flow_id;
+};
+/**
+ * tf_delete_em_entry parameter definition
+ */
+struct tf_delete_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] epoch group IDs of the entry to delete;
+	 * a 2-element array of IDs (Brd4 only)
+	 */
+	uint16_t *epochs;
+	/**
+	 * [in] flow handle identifying the entry to delete
+	 */
+	uint64_t flow_handle;
+};
+/**
+ * tf_search_em_entry parameter definition
+ */
+struct tf_search_em_entry_parms {
+	/**
+	 * [in] receive or transmit direction
+	 */
+	enum tf_dir dir;
+	/**
+	 * [in] internal or external
+	 */
+	enum tf_mem mem;
+	/**
+	 * [in] ID of table scope to use (external only)
+	 */
+	uint32_t tbl_scope_id;
+	/**
+	 * [in] ID of table interface to use (Brd4 only)
+	 */
+	uint32_t tbl_if_id;
+	/**
+	 * [in] ptr to structure containing key fields
+	 */
+	uint8_t *key;
+	/**
+	 * [in] key bit length
+	 */
+	uint16_t key_sz_in_bits;
+	/**
+	 * [in/out] ptr to structure containing EM record fields
+	 */
+	uint8_t *em_record;
+	/**
+	 * [out] result size in bits
+	 */
+	uint16_t em_record_sz_in_bits;
+	/**
+	 * [in] epoch group IDs of the entry to look up;
+	 * a 2-element array of IDs (Brd4 only)
+	 */
+	uint16_t *epochs;
+	/**
+	 * [in] flow handle of the entry to search for
+	 */
+	uint64_t flow_handle;
+};
+
+/** insert em hash entry in internal table memory
+ *
+ * Internal:
+ *
+ * This API inserts an exact match entry into internal EM table memory
+ * of the specified direction.
+ *
+ * Note: The EM record is managed within the TruFlow core and not the
+ * application.
+ *
+ * A shadow copy of the internal record table maintains the association
+ * between the hash and its 1, 2, or 4 associated buckets.
+ *
+ * External:
+ * This API inserts an exact match entry into DRAM EM table memory of the
+ * specified direction and table scope.
+ *
+ * When inserting an entry into an exact match table, the TruFlow library may
+ * need to allocate a dynamic bucket for the entry (Brd4 only).
+ *
+ * The insertion of duplicate entries in an EM table is not permitted. If a
+ * TruFlow application can guarantee that it will never insert duplicates, it
+ * can disable duplicate checking by passing a zero value in the dup_check
+ * parameter to this API.  This will optimize performance. Otherwise, the
+ * TruFlow library will enforce protection against inserting duplicate entries.
+ *
+ * Flow handle is defined in this document:
+ *
+ * https://docs.google.com
+ * /document/d/1NESu7RpTN3jwxbokaPfYORQyChYRmJgs40wMIRe8_-Q/edit
+ *
+ * Returns success or busy code.
+ *
+ */
+int tf_insert_em_entry(struct tf *tfp,
+		       struct tf_insert_em_entry_parms *parms);
+
+/** delete EM hash entry from table memory
+ *
+ * Internal:
+ *
+ * This API deletes an exact match entry from internal EM table memory of the
+ * specified direction. If a valid flow ptr is passed in then that takes
+ * precedence over the pointer to the complete key passed in.
+ *
+ *
+ * External:
+ *
+ * This API deletes an exact match entry from EM table memory of the specified
+ * direction and table scope. If a valid flow handle is passed in then that
+ * takes precedence over the pointer to the complete key passed in.
+ *
+ * The TruFlow library may release a dynamic bucket when an entry is deleted.
+ *
+ *
+ * Returns success or not found code
+ *
+ *
+ */
+int tf_delete_em_entry(struct tf *tfp,
+		       struct tf_delete_em_entry_parms *parms);
+
+/** search EM hash entry in table memory
+ *
+ * Internal:
+ *
+ * This API looks up an EM entry in table memory with the specified EM
+ * key or flow (flow takes precedence) and direction.
+ *
+ * The status will be one of: success or entry not found.  If the lookup
+ * succeeds, a pointer to the matching entry and the result record associated
+ * with the matching entry will be provided.
+ *
+ * If flow_handle is set, search shadow copy.
+ *
+ * Otherwise, query the fw with key to get result.
+ *
+ * External:
+ *
+ * This API looks up an EM entry in table memory with the specified EM
+ * key or flow_handle (flow takes precedence), direction and table scope.
+ *
+ * The status will be one of: success or entry not found.  If the lookup
+ * succeeds, a pointer to the matching entry and the result record associated
+ * with the matching entry will be provided.
+ *
+ * Returns success or not found code
+ *
+ */
+int tf_search_em_entry(struct tf *tfp,
+		       struct tf_search_em_entry_parms *parms);
 #endif /* _TF_CORE_H_ */
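
A sketch of the allocate-then-set pairing described above, for an external
action record; not part of the patch, error paths are trimmed and the record
contents are placeholders:

    static int act_rec_example(struct tf *tfp, uint32_t tbl_scope_id)
    {
        struct tf_alloc_tbl_entry_parms aparms = { 0 };
        struct tf_set_tbl_entry_parms sparms = { 0 };
        uint8_t act_rec[TF_ACTION_RECORD_SZ] = { 0 }; /* placeholder data */
        int rc;

        aparms.dir = TF_DIR_TX;
        aparms.type = TF_TBL_TYPE_EXT;
        rc = tf_alloc_tbl_entry(tfp, &aparms);
        if (rc)
            return rc;

        sparms.tbl_scope_id = tbl_scope_id;
        sparms.dir = TF_DIR_TX;
        sparms.type = TF_TBL_TYPE_EXT;
        sparms.data = act_rec;
        sparms.data_sz_in_bytes = sizeof(act_rec);
        sparms.idx = aparms.idx; /* write to the allocated index */
        return tf_set_tbl_entry(tfp, &sparms);
    }
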
diff --git a/drivers/net/bnxt/tf_core/tf_em.c b/drivers/net/bnxt/tf_core/tf_em.c
new file mode 100644
index 0000000..7109eb1
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em.c
@@ -0,0 +1,516 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#include <string.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "tf_core.h"
+#include "tf_em.h"
+#include "tf_msg.h"
+#include "tfp.h"
+#include "lookup3.h"
+#include "tf_ext_flow_handle.h"
+
+#include "bnxt.h"
+
+/* Enable EEM table dump
+ */
+#define TF_EEM_DUMP
+
+static struct tf_eem_64b_entry zero_key_entry;
+
+/* Return the hash mask for a KEY0/KEY1 table. Returns 0 if the table
+ * size is not a multiple of 32K entries or exceeds 128M entries.
+ */
+static uint32_t tf_em_get_key_mask(int num_entries)
+{
+	uint32_t mask = num_entries - 1;
+
+	if (num_entries & 0x7FFF)
+		return 0;
+
+	if (num_entries > (128 * 1024 * 1024))
+		return 0;
+
+	return mask;
+}
+
+/* CRC32i support for Key0 hash */
+#define ucrc32(ch, crc) (crc32tbl[((crc) ^ (ch)) & 0xff] ^ ((crc) >> 8))
+#define crc32(x, y) crc32i(~0, x, y)
+
+static const uint32_t crc32tbl[] = {	/* CRC polynomial 0xedb88320 */
+0x00000000, 0x77073096, 0xee0e612c, 0x990951ba,
+0x076dc419, 0x706af48f, 0xe963a535, 0x9e6495a3,
+0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
+0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91,
+0x1db71064, 0x6ab020f2, 0xf3b97148, 0x84be41de,
+0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
+0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec,
+0x14015c4f, 0x63066cd9, 0xfa0f3d63, 0x8d080df5,
+0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
+0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b,
+0x35b5a8fa, 0x42b2986c, 0xdbbbc9d6, 0xacbcf940,
+0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
+0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116,
+0x21b4f4b5, 0x56b3c423, 0xcfba9599, 0xb8bda50f,
+0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
+0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d,
+0x76dc4190, 0x01db7106, 0x98d220bc, 0xefd5102a,
+0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,
+0x7807c9a2, 0x0f00f934, 0x9609a88e, 0xe10e9818,
+0x7f6a0dbb, 0x086d3d2d, 0x91646c97, 0xe6635c01,
+0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
+0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457,
+0x65b0d9c6, 0x12b7e950, 0x8bbeb8ea, 0xfcb9887c,
+0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
+0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2,
+0x4adfa541, 0x3dd895d7, 0xa4d1c46d, 0xd3d6f4fb,
+0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
+0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9,
+0x5005713c, 0x270241aa, 0xbe0b1010, 0xc90c2086,
+0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
+0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4,
+0x59b33d17, 0x2eb40d81, 0xb7bd5c3b, 0xc0ba6cad,
+0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,
+0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683,
+0xe3630b12, 0x94643b84, 0x0d6d6a3e, 0x7a6a5aa8,
+0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,
+0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe,
+0xf762575d, 0x806567cb, 0x196c3671, 0x6e6b06e7,
+0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
+0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5,
+0xd6d6a3e8, 0xa1d1937e, 0x38d8c2c4, 0x4fdff252,
+0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
+0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60,
+0xdf60efc3, 0xa867df55, 0x316e8eef, 0x4669be79,
+0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
+0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f,
+0xc5ba3bbe, 0xb2bd0b28, 0x2bb45a92, 0x5cb36a04,
+0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
+0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a,
+0x9c0906a9, 0xeb0e363f, 0x72076785, 0x05005713,
+0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,
+0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21,
+0x86d3d2d4, 0xf1d4e242, 0x68ddb3f8, 0x1fda836e,
+0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
+0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c,
+0x8f659eff, 0xf862ae69, 0x616bffd3, 0x166ccf45,
+0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
+0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db,
+0xaed16a4a, 0xd9d65adc, 0x40df0b66, 0x37d83bf0,
+0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
+0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6,
+0xbad03605, 0xcdd70693, 0x54de5729, 0x23d967bf,
+0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
+0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
+};
+
+static uint32_t crc32i(uint32_t crc, const uint8_t *buf, size_t len)
+{
+	int l;
+
+	for (l = (len - 1); l >= 0; l--) {
+		crc = ucrc32(buf[l], crc);
+	}
+
+	return ~crc;
+}
+
+static uint32_t tf_em_lkup_get_crc32_hash(struct tf_session *session,
+					  uint8_t *key,
+					  enum tf_dir dir)
+{
+	int i;
+	uint32_t index;
+	uint32_t val1, val2;
+	uint8_t temp[4];
+	uint8_t *kptr = key;
+
+	/* Do byte-wise XOR of the 52-byte HASH key first. The caller
+	 * passes key pointing at the last byte, so walk backwards.
+	 */
+	index = *key;
+	kptr--;
+
+	for (i = TF_HW_EM_KEY_MAX_SIZE - 2; i >= 0; i--) {
+		index = index ^ *kptr;
+		kptr--;
+	}
+
+	/* Get seeds */
+	val1 = session->lkup_em_seed_mem[dir][index * 2];
+	val2 = session->lkup_em_seed_mem[dir][index * 2 + 1];
+
+	temp[3] = (uint8_t)(val1 >> 24);
+	temp[2] = (uint8_t)(val1 >> 16);
+	temp[1] = (uint8_t)(val1 >> 8);
+	temp[0] = (uint8_t)(val1 & 0xff);
+	val1 = 0;
+
+	/* Start with seed */
+	if (!(val2 & 0x1))
+		val1 = crc32i(~val1, temp, 4);
+
+	val1 = crc32i(~val1,
+		      (key - (TF_HW_EM_KEY_MAX_SIZE - 1)),
+		      TF_HW_EM_KEY_MAX_SIZE);
+
+	/* End with seed */
+	if (val2 & 0x1)
+		val1 = crc32i(~val1, temp, 4);
+
+	return val1;
+}
+
+static uint32_t tf_em_lkup_get_lookup3_hash(uint32_t lookup3_init_value,
+					    uint8_t *in_key)
+{
+	uint32_t val1;
+
+	val1 = hashword(((uint32_t *)in_key) + 1,
+			 TF_HW_EM_KEY_MAX_SIZE / (sizeof(uint32_t)),
+			 lookup3_init_value);
+
+	return val1;
+}
+
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum tf_em_table_type table_type)
+{
+	int level = 0;
+	int page = offset / TF_EM_PAGE_SIZE;
+	void *addr = NULL;
+	struct tf_em_ctx_mem_info *ctx;
+
+	/* Validate the indices before using them */
+	if (dir != TF_DIR_RX && dir != TF_DIR_TX)
+		return NULL;
+
+	if (table_type < KEY0_TABLE || table_type > EFC_TABLE)
+		return NULL;
+
+	ctx = &tbl_scope_cb->em_ctx_info[dir];
+
+	/*
+	 * Use the lowest (leaf) level of the page table; it maps the
+	 * actual data pages.
+	 */
+	level = ctx->em_tables[table_type].num_lvl - 1;
+
+	addr = (void *)ctx->em_tables[table_type].pg_tbl[level].pg_va_tbl[page];
+
+	return addr;
+}
+
+/** Read Key table entry
+ *
+ * The entry at the given index is read into the caller-provided buffer.
+ */
+static int tf_em_read_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+				 struct tf_eem_64b_entry *entry,
+				 uint32_t entry_size,
+				 uint32_t index,
+				 enum tf_em_table_type table_type,
+				 enum tf_dir dir)
+{
+	void *page;
+	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
+
+	page = tf_em_get_table_page(tbl_scope_cb,
+				    dir,
+				    (index * entry_size),
+				    table_type);
+
+	if (page == NULL)
+		return -EINVAL;
+
+	memcpy((uint8_t *)entry, (uint8_t *)page + entry_offset, entry_size);
+	return 0;
+}
+
+static int tf_em_write_entry(struct tf_tbl_scope_cb *tbl_scope_cb,
+				 struct tf_eem_64b_entry *entry,
+				 uint32_t entry_size,
+				 uint32_t index,
+				 enum tf_em_table_type table_type,
+				 enum tf_dir dir)
+{
+	void *page;
+	uint32_t entry_offset = (index * entry_size) % TF_EM_PAGE_SIZE;
+
+	page = tf_em_get_table_page(tbl_scope_cb,
+				    dir,
+				    (index * entry_size),
+				    table_type);
+
+	if (page == NULL)
+		return -EINVAL;
+
+	memcpy((uint8_t *)page + entry_offset, entry, entry_size);
+
+	return 0;
+}
+
+static int tf_em_entry_exists(struct tf_tbl_scope_cb *tbl_scope_cb,
+			       struct tf_eem_64b_entry *entry,
+			       uint32_t index,
+			       enum tf_em_table_type table_type,
+			       enum tf_dir dir)
+{
+	int rc;
+	struct tf_eem_64b_entry table_entry;
+
+	rc = tf_em_read_entry(tbl_scope_cb,
+			      &table_entry,
+			      TF_EM_KEY_RECORD_SIZE,
+			      index,
+			      table_type,
+			      dir);
+
+	if (rc != 0)
+		return -EINVAL;
+
+	if (table_entry.hdr.word1 & (1 << TF_LKUP_RECORD_VALID_SHIFT)) {
+		if (entry != NULL) {
+			if (memcmp(&table_entry,
+				   entry,
+				   TF_EM_KEY_RECORD_SIZE) == 0)
+				return -EEXIST;
+		} else {
+			return -EEXIST;
+		}
+
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+static void tf_em_create_key_entry(struct tf_eem_entry_hdr *result,
+				    uint8_t	       *in_key,
+				    struct tf_eem_64b_entry *key_entry)
+{
+	key_entry->hdr.word1 = result->word1;
+
+	/* Internal and external action records currently use the same
+	 * pointer encoding, so no conversion is needed either way.
+	 */
+	key_entry->hdr.pointer = result->pointer;
+
+	memcpy(key_entry->key, in_key, TF_HW_EM_KEY_MAX_SIZE + 4);
+}
+
+/* tf_em_select_inject_table
+ *
+ * Returns:
+ * 0       - Key does not exist in either table and can be inserted
+ *	     at "index" in table "table".
+ * -EEXIST - Key already exists at "index" in table "table".
+ * -EINVAL - Neither table can accept the key.
+ */
+static int tf_em_select_inject_table(struct tf_tbl_scope_cb	*tbl_scope_cb,
+					  enum tf_dir dir,
+					  struct tf_eem_64b_entry *entry,
+					  uint32_t key0_hash,
+					  uint32_t key1_hash,
+					  uint32_t *index,
+					  enum tf_em_table_type *table)
+{
+	int key0_entry;
+	int key1_entry;
+
+	/*
+	 * Check KEY0 table.
+	 */
+	key0_entry = tf_em_entry_exists(tbl_scope_cb,
+					 entry,
+					 key0_hash,
+					 KEY0_TABLE,
+					 dir);
+
+	/*
+	 * Check KEY1 table.
+	 */
+	key1_entry = tf_em_entry_exists(tbl_scope_cb,
+					 entry,
+					 key1_hash,
+					 KEY1_TABLE,
+					 dir);
+
+	if (key0_entry == -EEXIST) {
+		*table = KEY0_TABLE;
+		*index = key0_hash;
+		return -EEXIST;
+	} else if (key1_entry == -EEXIST) {
+		*table = KEY1_TABLE;
+		*index = key1_hash;
+		return -EEXIST;
+	} else if (key0_entry == 0) {
+		*table = KEY0_TABLE;
+		*index = key0_hash;
+		return 0;
+	} else if (key1_entry == 0) {
+		*table = KEY1_TABLE;
+		*index = key1_hash;
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+/** insert EEM entry API
+ *
+ * returns:
+ *  0
+ *  TF_ERR	    - unable to get lock
+ *
+ * insert callback returns:
+ *   0
+ *   TF_ERR_EM_DUP  - key is already in table
+ */
+int tf_insert_eem_entry(struct tf_session	   *session,
+			struct tf_tbl_scope_cb	   *tbl_scope_cb,
+			struct tf_insert_em_entry_parms *parms)
+{
+	uint32_t	   mask;
+	uint32_t	   key0_hash;
+	uint32_t	   key1_hash;
+	uint32_t	   key0_index;
+	uint32_t	   key1_index;
+	struct tf_eem_64b_entry key_entry;
+	uint32_t	   index;
+	enum tf_em_table_type table_type;
+	uint32_t	   gfid;
+	int		   num_of_entry;
+
+	/* Get mask to use on hash */
+	mask = tf_em_get_key_mask(tbl_scope_cb->em_ctx_info[parms->dir].em_tables[KEY0_TABLE].num_entries);
+
+	if (!mask)
+		return -EINVAL;
+
+	/* The CRC32 hash walks the (TF_HW_EM_KEY_MAX_SIZE + 4) byte
+	 * key backwards from its last byte.
+	 */
+	num_of_entry = TF_HW_EM_KEY_MAX_SIZE + 4;
+
+	key0_hash = tf_em_lkup_get_crc32_hash(session,
+				      &parms->key[num_of_entry] - 1,
+				      parms->dir);
+	key0_index = key0_hash & mask;
+
+	key1_hash =
+	   tf_em_lkup_get_lookup3_hash(session->lkup_lkup3_init_cfg[parms->dir],
+					parms->key);
+	key1_index = key1_hash & mask;
+
+	/*
+	 * Use the "result" arg to populate all of the key entry then
+	 * store the byte swapped "raw" entry in a local copy ready
+	 * for insertion in to the table.
+	 */
+	tf_em_create_key_entry((struct tf_eem_entry_hdr *)parms->em_record,
+				((uint8_t *)parms->key),
+				&key_entry);
+
+	/*
+	 * Find which table to use
+	 */
+	if (tf_em_select_inject_table(tbl_scope_cb,
+				      parms->dir,
+				      &key_entry,
+				      key0_index,
+				      key1_index,
+				      &index,
+				      &table_type) == 0) {
+		if (table_type == KEY0_TABLE) {
+			TF_SET_GFID(gfid,
+				    key0_index,
+				    KEY0_TABLE);
+		} else {
+			TF_SET_GFID(gfid,
+				    key1_index,
+				    KEY1_TABLE);
+		}
+
+		/*
+		 * Inject
+		 */
+		if (tf_em_write_entry(tbl_scope_cb,
+				      &key_entry,
+				      TF_EM_KEY_RECORD_SIZE,
+				      index,
+				      table_type,
+				      parms->dir) == 0) {
+			TF_SET_FLOW_ID(parms->flow_id,
+				       gfid,
+				       TF_GFID_TABLE_EXTERNAL,
+				       parms->dir);
+			TF_SET_FIELDS_IN_FLOW_HANDLE(parms->flow_handle,
+						     0,
+						     0,
+						     0,
+						     index,
+						     0,
+						     table_type);
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}
+
+/** delete EEM hash entry API
+ *
+ * returns:
+ *   0
+ *   -EINVAL	  - parameter error
+ *   TF_NO_SESSION    - bad session ID
+ *   TF_ERR_TBL_SCOPE - invalid table scope
+ *   TF_ERR_TBL_IF    - invalid table interface
+ *
+ * delete callback returns
+ *   0
+ *   TF_NO_EM_MATCH - entry not found
+ */
+int tf_delete_eem_entry(struct tf *tfp,
+			struct tf_delete_em_entry_parms *parms)
+{
+	struct tf_session	   *session;
+	struct tf_tbl_scope_cb	   *tbl_scope_cb;
+	enum tf_em_table_type hash_type;
+	uint32_t index;
+
+	if (parms == NULL)
+		return -EINVAL;
+
+	session = (struct tf_session *)tfp->session->core_data;
+	if (session == NULL)
+		return -EINVAL;
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	if (parms->flow_handle == 0)
+		return -EINVAL;
+
+	TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(parms->flow_handle, hash_type);
+	TF_GET_INDEX_FROM_FLOW_HANDLE(parms->flow_handle, index);
+
+	if (tf_em_entry_exists(tbl_scope_cb,
+			       NULL,
+			       index,
+			       hash_type,
+			       parms->dir) == -EEXIST) {
+		tf_em_write_entry(tbl_scope_cb,
+				  &zero_key_entry,
+				  TF_EM_KEY_RECORD_SIZE,
+				  index,
+				  hash_type,
+				  parms->dir);
+
+		return 0;
+	}
+
+	return -EINVAL;
+}
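
To make the sizing constraint in tf_em_get_key_mask() concrete, a few
example values (a sketch, not part of the patch). A usable mask is returned
only when the KEY0/KEY1 table size is a multiple of 32K entries and no
larger than 128M; callers are expected to size tables in powers of two so
the resulting mask is contiguous:

    tf_em_get_key_mask(32 * 1024);   /* 0x00007FFF: smallest valid size */
    tf_em_get_key_mask(1024 * 1024); /* 0x000FFFFF: 1M entries          */
    tf_em_get_key_mask(48 * 1024);   /* 0: low 15 bits set, rejected    */
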
diff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h
new file mode 100644
index 0000000..8a3584f
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_em.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EM_H_
+#define _TF_EM_H_
+
+#include "tf_core.h"
+#include "tf_session.h"
+
+#define TF_HW_EM_KEY_MAX_SIZE 52
+#define TF_EM_KEY_RECORD_SIZE 64
+
+/** EEM Entry header
+ *
+ */
+struct tf_eem_entry_hdr {
+	uint32_t pointer;
+	uint32_t word1;  /*
+			  * The header is made up of two words,
+			  * this is the first word. This field has multiple
+			  * subfields, there is no suitable single name for
+			  * it so just going with word1.
+			  */
+#define TF_LKUP_RECORD_VALID_SHIFT 31
+#define TF_LKUP_RECORD_VALID_MASK 0x80000000
+#define TF_LKUP_RECORD_L1_CACHEABLE_SHIFT 30
+#define TF_LKUP_RECORD_L1_CACHEABLE_MASK 0x40000000
+#define TF_LKUP_RECORD_STRENGTH_SHIFT 28
+#define TF_LKUP_RECORD_STRENGTH_MASK 0x30000000
+#define TF_LKUP_RECORD_RESERVED_SHIFT 17
+#define TF_LKUP_RECORD_RESERVED_MASK 0x0FFE0000
+#define TF_LKUP_RECORD_KEY_SIZE_SHIFT 8
+#define TF_LKUP_RECORD_KEY_SIZE_MASK 0x0001FF00
+#define TF_LKUP_RECORD_ACT_REC_SIZE_SHIFT 3
+#define TF_LKUP_RECORD_ACT_REC_SIZE_MASK 0x000000F8
+#define TF_LKUP_RECORD_ACT_REC_INT_SHIFT 2
+#define TF_LKUP_RECORD_ACT_REC_INT_MASK 0x00000004
+#define TF_LKUP_RECORD_EXT_FLOW_CTR_SHIFT 1
+#define TF_LKUP_RECORD_EXT_FLOW_CTR_MASK 0x00000002
+#define TF_LKUP_RECORD_ACT_PTR_MSB_SHIFT 0
+#define TF_LKUP_RECORD_ACT_PTR_MSB_MASK 0x00000001
+};
+
+/** EEM Entry
+ *  Each EEM entry is 512-bit (64-bytes)
+ */
+struct tf_eem_64b_entry {
+	/** Key is 448 bits - 56 bytes */
+	uint8_t key[TF_EM_KEY_RECORD_SIZE - sizeof(struct tf_eem_entry_hdr)];
+	/** Header is 8 bytes long */
+	struct tf_eem_entry_hdr hdr;
+};
+
+/**
+ * Allocates EEM Table scope
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ *   -ENOMEM - Out of memory
+ */
+int tf_alloc_eem_tbl_scope(struct tf *tfp,
+			   struct tf_alloc_tbl_scope_parms *parms);
+
+/**
+ * Frees EEM Table scope control block
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+int tf_free_eem_tbl_scope_cb(struct tf *tfp,
+			     struct tf_free_tbl_scope_parms *parms);
+
+/**
+ * Function to search for table scope control block structure
+ * with specified table scope ID.
+ *
+ * [in] session
+ *   Session to use for the search of the table scope control block
+ * [in] tbl_scope_id
+ *   Table scope ID to search for
+ *
+ * Returns:
+ *  Pointer to the found table scope control block struct or NULL if
+ *  table scope control block struct not found
+ */
+struct tf_tbl_scope_cb *tbl_scope_cb_find(struct tf_session *session,
+					  uint32_t tbl_scope_id);
+
+int tf_insert_eem_entry(struct tf_session *session,
+			struct tf_tbl_scope_cb *tbl_scope_cb,
+			struct tf_insert_em_entry_parms *parms);
+
+int tf_delete_eem_entry(struct tf *tfp,
+			struct tf_delete_em_entry_parms *parms);
+
+void *tf_em_get_table_page(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   enum tf_dir dir,
+			   uint32_t offset,
+			   enum tf_em_table_type table_type);
+
+#endif /* _TF_EM_H_ */
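
Since the key field above is sized by subtracting the header size, the
64-byte record layout can be checked at compile time; a sketch using C11
static assertions (not part of the patch):

    #include <assert.h>
    #include "tf_em.h"

    /* Key (56B) + header (8B) must pack to exactly one 64B EEM record */
    static_assert(sizeof(struct tf_eem_entry_hdr) == 8,
                  "EEM header must be 8 bytes");
    static_assert(sizeof(struct tf_eem_64b_entry) == TF_EM_KEY_RECORD_SIZE,
                  "EEM entry must be 64 bytes");
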
diff --git a/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
new file mode 100644
index 0000000..417a99c
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/tf_ext_flow_handle.h
@@ -0,0 +1,166 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _TF_EXT_FLOW_HANDLE_H_
+#define _TF_EXT_FLOW_HANDLE_H_
+
+#define TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK	0x00000000F0000000ULL
+#define TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT	28
+#define TF_FLOW_TYPE_FLOW_HANDLE_MASK		0x00000000000000F0ULL
+#define TF_FLOW_TYPE_FLOW_HANDLE_SFT		4
+#define TF_FLAGS_FLOW_HANDLE_MASK		0x000000000000000FULL
+#define TF_FLAGS_FLOW_HANDLE_SFT		0
+#define TF_INDEX_FLOW_HANDLE_MASK		0xFFFFFFF000000000ULL
+#define TF_INDEX_FLOW_HANDLE_SFT		36
+#define TF_ENTRY_NUM_FLOW_HANDLE_MASK		0x0000000E00000000ULL
+#define TF_ENTRY_NUM_FLOW_HANDLE_SFT		33
+#define TF_HASH_TYPE_FLOW_HANDLE_MASK		0x0000000100000000ULL
+#define TF_HASH_TYPE_FLOW_HANDLE_SFT		32
+
+#define TF_FLOW_HANDLE_MASK (TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK |	\
+				TF_FLOW_TYPE_FLOW_HANDLE_MASK |		\
+				TF_FLAGS_FLOW_HANDLE_MASK |		\
+				TF_INDEX_FLOW_HANDLE_MASK |		\
+				TF_ENTRY_NUM_FLOW_HANDLE_MASK |		\
+				TF_HASH_TYPE_FLOW_HANDLE_MASK)
+
+#define TF_GET_FIELDS_FROM_FLOW_HANDLE(flow_handle,			\
+				       num_key_entries,			\
+				       flow_type,			\
+				       flags,				\
+				       index,				\
+				       entry_num,			\
+				       hash_type)			\
+do {									\
+	(num_key_entries) = \
+		(((flow_handle) & TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK) >> \
+		 TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT);			\
+	(flow_type) = (((flow_handle) & TF_FLOW_TYPE_FLOW_HANDLE_MASK) >> \
+		     TF_FLOW_TYPE_FLOW_HANDLE_SFT);			\
+	(flags) = (((flow_handle) & TF_FLAGS_FLOW_HANDLE_MASK) >>	\
+		     TF_FLAGS_FLOW_HANDLE_SFT);				\
+	(index) = (((flow_handle) & TF_INDEX_FLOW_HANDLE_MASK) >>	\
+		     TF_INDEX_FLOW_HANDLE_SFT);				\
+	(entry_num) = (((flow_handle) & TF_ENTRY_NUM_FLOW_HANDLE_MASK) >> \
+		     TF_ENTRY_NUM_FLOW_HANDLE_SFT);			\
+	(hash_type) = (((flow_handle) & TF_HASH_TYPE_FLOW_HANDLE_MASK) >> \
+		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
+} while (0)
+
+#define TF_SET_FIELDS_IN_FLOW_HANDLE(flow_handle,			\
+				     num_key_entries,			\
+				     flow_type,				\
+				     flags,				\
+				     index,				\
+				     entry_num,				\
+				     hash_type)				\
+do {									\
+	(flow_handle) &= ~TF_FLOW_HANDLE_MASK;				\
+	(flow_handle) |= \
+		(((num_key_entries) << TF_NUM_KEY_ENTRIES_FLOW_HANDLE_SFT) & \
+		 TF_NUM_KEY_ENTRIES_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= (((flow_type) << TF_FLOW_TYPE_FLOW_HANDLE_SFT) & \
+			TF_FLOW_TYPE_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= (((flags) << TF_FLAGS_FLOW_HANDLE_SFT) &	\
+			TF_FLAGS_FLOW_HANDLE_MASK);			\
+	(flow_handle) |= ((((uint64_t)index) << TF_INDEX_FLOW_HANDLE_SFT) & \
+			TF_INDEX_FLOW_HANDLE_MASK);			\
+	(flow_handle) |=						\
+		((((uint64_t)entry_num) << TF_ENTRY_NUM_FLOW_HANDLE_SFT) & \
+		 TF_ENTRY_NUM_FLOW_HANDLE_MASK);			\
+	(flow_handle) |=						\
+		((((uint64_t)hash_type) << TF_HASH_TYPE_FLOW_HANDLE_SFT) & \
+		 TF_HASH_TYPE_FLOW_HANDLE_MASK);			\
+} while (0)
+#define TF_SET_FIELDS_IN_WH_FLOW_HANDLE TF_SET_FIELDS_IN_FLOW_HANDLE
+
+#define TF_GET_INDEX_FROM_FLOW_HANDLE(flow_handle,			\
+				      index)				\
+do {									\
+	index = (((flow_handle) & TF_INDEX_FLOW_HANDLE_MASK) >>		\
+		     TF_INDEX_FLOW_HANDLE_SFT);				\
+} while (0)
+
+#define TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(flow_handle,			\
+					  hash_type)			\
+do {									\
+	hash_type = (((flow_handle) & TF_HASH_TYPE_FLOW_HANDLE_MASK) >>	\
+		     TF_HASH_TYPE_FLOW_HANDLE_SFT);			\
+} while (0)
+
+/*
+ * 32 bit Flow ID handlers
+ */
+#define TF_GFID_FLOW_ID_MASK		0xFFFFFFF0UL
+#define TF_GFID_FLOW_ID_SFT		4
+#define TF_FLAG_FLOW_ID_MASK		0x00000002UL
+#define TF_FLAG_FLOW_ID_SFT		1
+#define TF_DIR_FLOW_ID_MASK		0x00000001UL
+#define TF_DIR_FLOW_ID_SFT		0
+
+#define TF_SET_FLOW_ID(flow_id, gfid, flag, dir)			\
+do {									\
+	(flow_id) &= ~(TF_GFID_FLOW_ID_MASK |				\
+		     TF_FLAG_FLOW_ID_MASK |				\
+		     TF_DIR_FLOW_ID_MASK);				\
+	(flow_id) |= (((gfid) << TF_GFID_FLOW_ID_SFT) &			\
+		    TF_GFID_FLOW_ID_MASK) |				\
+		(((flag) << TF_FLAG_FLOW_ID_SFT) &			\
+		 TF_FLAG_FLOW_ID_MASK) |				\
+		(((dir) << TF_DIR_FLOW_ID_SFT) &			\
+		 TF_DIR_FLOW_ID_MASK);					\
+} while (0)
+
+#define TF_GET_GFID_FROM_FLOW_ID(flow_id, gfid)				\
+do {									\
+	gfid = (((flow_id) & TF_GFID_FLOW_ID_MASK) >>			\
+		TF_GFID_FLOW_ID_SFT);					\
+} while (0)
+
+#define TF_GET_DIR_FROM_FLOW_ID(flow_id, dir)				\
+do {									\
+	dir = (((flow_id) & TF_DIR_FLOW_ID_MASK) >>			\
+		TF_DIR_FLOW_ID_SFT);					\
+} while (0)
+
+#define TF_GET_FLAG_FROM_FLOW_ID(flow_id, flag)				\
+do {									\
+	flag = (((flow_id) & TF_FLAG_FLOW_ID_MASK) >>			\
+		TF_FLAG_FLOW_ID_SFT);					\
+} while (0)
+
+/*
+ * 32 bit GFID handlers
+ */
+#define TF_HASH_INDEX_GFID_MASK	0x07FFFFFFUL
+#define TF_HASH_INDEX_GFID_SFT	0
+#define TF_HASH_TYPE_GFID_MASK	0x08000000UL
+#define TF_HASH_TYPE_GFID_SFT	27
+
+#define TF_GFID_TABLE_INTERNAL 0
+#define TF_GFID_TABLE_EXTERNAL 1
+
+#define TF_SET_GFID(gfid, index, type)					\
+do {									\
+	gfid = (((index) << TF_HASH_INDEX_GFID_SFT) &			\
+		TF_HASH_INDEX_GFID_MASK) |				\
+		(((type) << TF_HASH_TYPE_GFID_SFT) &			\
+		 TF_HASH_TYPE_GFID_MASK);				\
+} while (0)
+
+#define TF_GET_HASH_INDEX_FROM_GFID(gfid, index)			\
+do {									\
+	index = (((gfid) & TF_HASH_INDEX_GFID_MASK) >>			\
+		TF_HASH_INDEX_GFID_SFT);				\
+} while (0)
+
+#define TF_GET_HASH_TYPE_FROM_GFID(gfid, type)				\
+do {									\
+	type = (((gfid) & TF_HASH_TYPE_GFID_MASK) >>			\
+		TF_HASH_TYPE_GFID_SFT);					\
+} while (0)
+
+
+#endif /* _TF_EXT_FLOW_HANDLE_H_ */
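
A round-trip sketch of the handle encoding (values arbitrary, not part of
the patch): the setter packs the fields into the 64-bit handle and the
getters recover them:

    uint64_t handle = 0;
    uint32_t index, hash_type;

    /* 4 key entries, flow type 0, flags 0, index 0x123, entry 0,
     * hash type 1 (KEY1 table)
     */
    TF_SET_FIELDS_IN_FLOW_HANDLE(handle, 4, 0, 0, 0x123, 0, 1);

    TF_GET_INDEX_FROM_FLOW_HANDLE(handle, index);         /* 0x123 */
    TF_GET_HASH_TYPE_FROM_FLOW_HANDLE(handle, hash_type); /* 1 */
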
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index b9ed127..c507ec7 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -869,6 +869,177 @@ tf_msg_session_sram_resc_flush(struct tf *tfp,
 	return tfp_le_to_cpu_32(parms.tf_resp_code);
 }
 
+/**
+ * Sends EM mem register request to Firmware
+ */
+int tf_msg_em_mem_rgtr(struct tf *tfp,
+		       int           page_lvl,
+		       int           page_size,
+		       uint64_t      dma_addr,
+		       uint16_t     *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_rgtr_input req = { 0 };
+	struct hwrm_tf_ctxt_mem_rgtr_output resp = { 0 };
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.page_level = page_lvl;
+	req.page_size = page_size;
+	req.page_dir = tfp_cpu_to_le_64(dma_addr);
+
+	parms.tf_type = HWRM_TF_CTXT_MEM_RGTR;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	*ctx_id = tfp_le_to_cpu_16(resp.ctx_id);
+
+	return rc;
+}
+
+/**
+ * Sends EM mem unregister request to Firmware
+ */
+int tf_msg_em_mem_unrgtr(struct tf *tfp,
+			 uint16_t  *ctx_id)
+{
+	int rc;
+	struct hwrm_tf_ctxt_mem_unrgtr_input req = {0};
+	struct hwrm_tf_ctxt_mem_unrgtr_output resp = {0};
+	struct tfp_send_msg_parms parms = { 0 };
+
+	req.ctx_id = tfp_cpu_to_le_32(*ctx_id);
+
+	parms.tf_type = HWRM_TF_CTXT_MEM_UNRGTR;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
+ * Sends EM qcaps request to Firmware
+ */
+int tf_msg_em_qcaps(struct tf *tfp,
+		    int dir,
+		    struct tf_em_caps *em_caps)
+{
+	int rc;
+	struct hwrm_tf_ext_em_qcaps_input  req = {0};
+	struct hwrm_tf_ext_em_qcaps_output resp = { 0 };
+	uint32_t             flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+
+	parms.tf_type = HWRM_TF_EXT_EM_QCAPS;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	if (rc)
+		return rc;
+
+	em_caps->supported = tfp_le_to_cpu_32(resp.supported);
+	em_caps->max_entries_supported =
+		tfp_le_to_cpu_32(resp.max_entries_supported);
+	em_caps->key_entry_size = tfp_le_to_cpu_16(resp.key_entry_size);
+	em_caps->record_entry_size =
+		tfp_le_to_cpu_16(resp.record_entry_size);
+	em_caps->efc_entry_size = tfp_le_to_cpu_16(resp.efc_entry_size);
+
+	return rc;
+}
+
+/**
+ * Sends EM config request to Firmware
+ */
+int tf_msg_em_cfg(struct tf *tfp,
+		  uint32_t   num_entries,
+		  uint16_t   key0_ctx_id,
+		  uint16_t   key1_ctx_id,
+		  uint16_t   record_ctx_id,
+		  uint16_t   efc_ctx_id,
+		  int        dir)
+{
+	int rc;
+	struct hwrm_tf_ext_em_cfg_input  req = {0};
+	struct hwrm_tf_ext_em_cfg_output resp = {0};
+	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	flags |= HWRM_TF_EXT_EM_QCAPS_INPUT_FLAGS_PREFERRED_OFFLOAD;
+
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.num_entries = tfp_cpu_to_le_32(num_entries);
+
+	req.key0_ctx_id = tfp_cpu_to_le_16(key0_ctx_id);
+	req.key1_ctx_id = tfp_cpu_to_le_16(key1_ctx_id);
+	req.record_ctx_id = tfp_cpu_to_le_16(record_ctx_id);
+	req.efc_ctx_id = tfp_cpu_to_le_16(efc_ctx_id);
+
+	parms.tf_type = HWRM_TF_EXT_EM_CFG;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
+/**
+ * Sends EM operation request to Firmware
+ */
+int tf_msg_em_op(struct tf *tfp,
+		 int        dir,
+		 uint16_t   op)
+{
+	int rc;
+	struct hwrm_tf_ext_em_op_input  req = {0};
+	struct hwrm_tf_ext_em_op_output resp = {0};
+	uint32_t flags;
+	struct tfp_send_msg_parms parms = { 0 };
+
+	flags = (dir == TF_DIR_TX ? HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_EXT_EM_CFG_INPUT_FLAGS_DIR_RX);
+	req.flags = tfp_cpu_to_le_32(flags);
+	req.op = tfp_cpu_to_le_16(op);
+
+	parms.tf_type = HWRM_TF_EXT_EM_OP;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
+
+	rc = tfp_send_msg_direct(tfp,
+				 &parms);
+	return rc;
+}
+
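Taken together these messages follow a fixed bring-up order, which
tf_alloc_eem_tbl_scope() later in this patch walks through per direction.
A minimal sketch of that sequence, with error handling elided and
placeholder variables (num_lvl, l0_dma_addr and the *_ctx ids come from
the sized EM tables):

	struct tf_em_caps caps;
	uint16_t ctx_id;

	tf_msg_em_qcaps(tfp, TF_DIR_RX, &caps);      /* query limits */
	tf_msg_em_mem_rgtr(tfp, num_lvl - 1,         /* one per EM table */
			   TF_EM_PAGE_SIZE_ENUM,
			   l0_dma_addr, &ctx_id);
	tf_msg_em_cfg(tfp, num_entries,              /* bind the ctx ids */
		      key0_ctx, key1_ctx, record_ctx, efc_ctx,
		      TF_DIR_RX);
	tf_msg_em_op(tfp, TF_DIR_RX,                 /* switch EEM on */
		     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
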
 int
 tf_msg_set_tbl_entry(struct tf *tfp,
 		     enum tf_dir dir,
diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h
index 9055b16..b8d8c1e 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.h
+++ b/drivers/net/bnxt/tf_core/tf_msg.h
@@ -122,6 +122,46 @@ int tf_msg_session_sram_resc_flush(struct tf *tfp,
 				   struct tf_rm_entry *sram_entry);
 
 /**
+ * Sends EM mem register request to Firmware
+ */
+int tf_msg_em_mem_rgtr(struct tf *tfp,
+		       int           page_lvl,
+		       int           page_size,
+		       uint64_t      dma_addr,
+		       uint16_t     *ctx_id);
+
+/**
+ * Sends EM mem unregister request to Firmware
+ */
+int tf_msg_em_mem_unrgtr(struct tf *tfp,
+			 uint16_t     *ctx_id);
+
+/**
+ * Sends EM qcaps request to Firmware
+ */
+int tf_msg_em_qcaps(struct tf *tfp,
+		    int dir,
+		    struct tf_em_caps *em_caps);
+
+/**
+ * Sends EM config request to Firmware
+ */
+int tf_msg_em_cfg(struct tf *tfp,
+		  uint32_t      num_entries,
+		  uint16_t      key0_ctx_id,
+		  uint16_t      key1_ctx_id,
+		  uint16_t      record_ctx_id,
+		  uint16_t      efc_ctx_id,
+		  int           dir);
+
+/**
+ * Sends EM operation request to Firmware
+ */
+int tf_msg_em_op(struct tf *tfp,
+		 int        dir,
+		 uint16_t   op);
+
+/**
  * Sends tcam entry 'set' to the Firmware.
  *
  * [in] tfp
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 14bf4ef..632df4b 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -15,7 +15,7 @@
 #include "hsi_struct_def_dpdk.h"
 
 #include "tf_core.h"
-#include "tf_session.h"
+#include "tf_em.h"
 #include "tf_msg.h"
 #include "tfp.h"
 #include "hwrm_tf.h"
@@ -30,6 +30,1366 @@
 /* Number of pointers per page_size */
 #define	MAX_PAGE_PTRS(page_size)  ((page_size) / sizeof(void *))
 
+#define TF_EM_PG_SZ_4K        (1 << 12)
+#define TF_EM_PG_SZ_8K        (1 << 13)
+#define TF_EM_PG_SZ_64K       (1 << 16)
+#define TF_EM_PG_SZ_256K      (1 << 18)
+#define TF_EM_PG_SZ_1M        (1 << 20)
+#define TF_EM_PG_SZ_2M        (1 << 21)
+#define TF_EM_PG_SZ_4M        (1 << 22)
+#define TF_EM_PG_SZ_1G        (1 << 30)
+
+#define	TF_EM_CTX_ID_INVALID   0xFFFF
+
+#define	TF_EM_MIN_ENTRIES     (1 << 15) /* 32K */
+#define	TF_EM_MAX_ENTRIES     (1 << 27) /* 128M */
+
+/**
+ * Function to free a page table
+ *
+ * [in] tp
+ *   Pointer to the page table to free
+ */
+static void
+tf_em_free_pg_tbl(struct tf_em_page_tbl *tp)
+{
+	uint32_t i;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		if (!tp->pg_va_tbl[i]) {
+			PMD_DRV_LOG(WARNING,
+				    "No mapping for page: %d table: %016" PRIu64 "\n",
+				    i,
+				    (uint64_t)tp);
+			continue;
+		}
+
+		tfp_free(tp->pg_va_tbl[i]);
+		tp->pg_va_tbl[i] = NULL;
+	}
+
+	tp->pg_count = 0;
+	tfp_free(tp->pg_va_tbl);
+	tp->pg_va_tbl = NULL;
+	tfp_free(tp->pg_pa_tbl);
+	tp->pg_pa_tbl = NULL;
+}
+
+/**
+ * Function to free an EM table
+ *
+ * [in] tbl
+ *   Pointer to the EM table to free
+ */
+static void
+tf_em_free_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		PMD_DRV_LOG(INFO,
+			   "EEM: Freeing page table: size %u lvl %d cnt %u\n",
+			   TF_EM_PAGE_SIZE,
+			    i,
+			    tp->pg_count);
+
+		tf_em_free_pg_tbl(tp);
+	}
+
+	tbl->l0_addr = NULL;
+	tbl->l0_dma_addr = 0;
+	tbl->num_lvl = 0;
+	tbl->num_data_pages = 0;
+}
+
+/**
+ * Allocation of page tables
+ *
+ * [in] tp
+ *   Pointer to the page table to populate
+ *
+ * [in] pg_count
+ *   Page count to allocate
+ *
+ * [in] pg_size
+ *   Size of each page
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_pg_tbl(struct tf_em_page_tbl *tp,
+		   uint32_t pg_count,
+		   uint32_t pg_size)
+{
+	uint32_t i;
+	struct tfp_calloc_parms parms;
+
+	parms.nitems = pg_count;
+	parms.size = sizeof(void *);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0)
+		return -ENOMEM;
+
+	tp->pg_va_tbl = parms.mem_va;
+
+	if (tfp_calloc(&parms) != 0) {
+		tfp_free(tp->pg_va_tbl);
+		return -ENOMEM;
+	}
+
+	tp->pg_pa_tbl = parms.mem_va;
+
+	tp->pg_count = 0;
+	tp->pg_size = pg_size;
+
+	for (i = 0; i < pg_count; i++) {
+		parms.nitems = 1;
+		parms.size = pg_size;
+		parms.alignment = TF_EM_PAGE_ALIGNMENT;
+
+		if (tfp_calloc(&parms) != 0)
+			goto cleanup;
+
+		tp->pg_pa_tbl[i] = (uint64_t)parms.mem_pa;
+		tp->pg_va_tbl[i] = parms.mem_va;
+
+		memset(tp->pg_va_tbl[i], 0, pg_size);
+		tp->pg_count++;
+	}
+
+	return 0;
+
+cleanup:
+	tf_em_free_pg_tbl(tp);
+	return -ENOMEM;
+}
+
+/**
+ * Allocates EM page tables
+ *
+ * [in] tbl
+ *   Table to allocate pages for
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of memory
+ */
+static int
+tf_em_alloc_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp;
+	int rc = 0;
+	int i;
+	uint32_t j;
+
+	for (i = 0; i < tbl->num_lvl; i++) {
+		tp = &tbl->pg_tbl[i];
+
+		rc = tf_em_alloc_pg_tbl(tp,
+					tbl->page_cnt[i],
+					TF_EM_PAGE_SIZE);
+		if (rc) {
+			PMD_DRV_LOG(WARNING,
+				"Failed to allocate page table: lvl: %d\n",
+				i);
+			goto cleanup;
+		}
+
+		for (j = 0; j < tp->pg_count; j++) {
+			PMD_DRV_LOG(INFO,
+				"EEM: Allocated page table: size %u lvl %d cnt"
+				" %u VA:%p PA:%p\n",
+				TF_EM_PAGE_SIZE,
+				i,
+				tp->pg_count,
+				(uint32_t *)tp->pg_va_tbl[j],
+				(uint32_t *)tp->pg_pa_tbl[j]);
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_free_page_table(tbl);
+	return rc;
+}
+
+/**
+ * Links EM page tables
+ *
+ * [in] tp
+ *   Pointer to page table
+ *
+ * [in] tp_next
+ *   Pointer to the next page table
+ *
+ * [in] set_pte_last
+ *   Flag controlling if the page table is last
+ */
+static void
+tf_em_link_page_table(struct tf_em_page_tbl *tp,
+		      struct tf_em_page_tbl *tp_next,
+		      bool set_pte_last)
+{
+	uint64_t *pg_pa = tp_next->pg_pa_tbl;
+	uint64_t *pg_va;
+	uint64_t valid;
+	uint32_t k = 0;
+	uint32_t i;
+	uint32_t j;
+
+	for (i = 0; i < tp->pg_count; i++) {
+		pg_va = tp->pg_va_tbl[i];
+
+		for (j = 0; j < MAX_PAGE_PTRS(tp->pg_size); j++) {
+			if (k == tp_next->pg_count - 2 && set_pte_last)
+				valid = PTU_PTE_NEXT_TO_LAST | PTU_PTE_VALID;
+			else if (k == tp_next->pg_count - 1 && set_pte_last)
+				valid = PTU_PTE_LAST | PTU_PTE_VALID;
+			else
+				valid = PTU_PTE_VALID;
+
+			pg_va[j] = tfp_cpu_to_le_64(pg_pa[k] | valid);
+			if (++k >= tp_next->pg_count)
+				return;
+		}
+	}
+}
+
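Each parent-level entry written here is simply the little-endian physical
address of one next-level page OR'd with validity flags. For illustration
only, assuming a next-level page at physical address 0x1f400000, a middle
entry would be written as:

	pg_va[j] = tfp_cpu_to_le_64(0x1f400000 | PTU_PTE_VALID);

The final two entries of the last level additionally carry
PTU_PTE_NEXT_TO_LAST and PTU_PTE_LAST so the PTU hardware can detect the
end of the chain.
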
+/**
+ * Set up an EM page table
+ *
+ * [in] tbl
+ *   Pointer to EM page table
+ */
+static void
+tf_em_setup_page_table(struct tf_em_table *tbl)
+{
+	struct tf_em_page_tbl *tp_next;
+	struct tf_em_page_tbl *tp;
+	bool set_pte_last = false;
+	int i;
+
+	for (i = 0; i < tbl->num_lvl - 1; i++) {
+		tp = &tbl->pg_tbl[i];
+		tp_next = &tbl->pg_tbl[i + 1];
+		if (i == tbl->num_lvl - 2)
+			set_pte_last = true;
+		tf_em_link_page_table(tp, tp_next, set_pte_last);
+	}
+
+	tbl->l0_addr = tbl->pg_tbl[PT_LVL_0].pg_va_tbl[0];
+	tbl->l0_dma_addr = tbl->pg_tbl[PT_LVL_0].pg_pa_tbl[0];
+}
+
+/**
+ * Given the page size, size of each data item (entry size),
+ * and the total number of entries needed, determine the number
+ * of page table levels and the number of data pages required.
+ *
+ * [in] page_size
+ *   Page size
+ *
+ * [in] entry_size
+ *   Entry size
+ *
+ * [in] num_entries
+ *   Number of entries needed
+ *
+ * [out] num_data_pages
+ *   Number of pages required
+ *
+ * Returns:
+ *   >= 0     - Number of EM page levels required
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_page_tbl_lvl(uint32_t page_size,
+			uint32_t entry_size,
+			uint32_t num_entries,
+			uint64_t *num_data_pages)
+{
+	uint64_t lvl_data_size = page_size;
+	int lvl = PT_LVL_0;
+	uint64_t data_size;
+
+	*num_data_pages = 0;
+	data_size = (uint64_t)num_entries * entry_size;
+
+	while (lvl_data_size < data_size) {
+		lvl++;
+
+		if (lvl == PT_LVL_1)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				page_size;
+		else if (lvl == PT_LVL_2)
+			lvl_data_size = (uint64_t)MAX_PAGE_PTRS(page_size) *
+				MAX_PAGE_PTRS(page_size) * page_size;
+		else
+			return -ENOMEM;
+	}
+
+	*num_data_pages = roundup(data_size, page_size) / page_size;
+
+	return lvl;
+}
+
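A worked example, assuming the default 2MB EM page size from tf_tbl.h:
with entry_size = 16B and num_entries = 1M, data_size is 16MB. PT_LVL_0
covers only a single 2MB page, so the loop advances to PT_LVL_1, which
covers MAX_PAGE_PTRS(2MB) * 2MB = 262144 * 2MB = 512GB and is sufficient.
The function reports num_data_pages = roundup(16MB, 2MB) / 2MB = 8 and
returns PT_LVL_1.
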
+/**
+ * Return the number of page table pages needed to
+ * reference the given number of next level pages.
+ *
+ * [in] num_pages
+ *   Number of EM pages
+ *
+ * [in] page_size
+ *   Size of each EM page
+ *
+ * Returns:
+ *   Number of EM page table pages
+ */
+static uint32_t
+tf_em_page_tbl_pgcnt(uint32_t num_pages,
+		     uint32_t page_size)
+{
+	return roundup(num_pages, MAX_PAGE_PTRS(page_size)) /
+		       MAX_PAGE_PTRS(page_size);
+}
+
+/**
+ * Given the number of data pages, page_size and the maximum
+ * number of page table levels (already determined), size
+ * the number of page table pages required at each level.
+ *
+ * [in] max_lvl
+ *   Max number of levels
+ *
+ * [in] num_data_pages
+ *   Number of EM data pages
+ *
+ * [in] page_size
+ *   Size of an EM page
+ *
+ * [out] *page_cnt
+ *   EM page count
+ */
+static void
+tf_em_size_page_tbls(int max_lvl,
+		     uint64_t num_data_pages,
+		     uint32_t page_size,
+		     uint32_t *page_cnt)
+{
+	if (max_lvl == PT_LVL_0) {
+		page_cnt[PT_LVL_0] = num_data_pages;
+	} else if (max_lvl == PT_LVL_1) {
+		page_cnt[PT_LVL_1] = num_data_pages;
+		page_cnt[PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	} else if (max_lvl == PT_LVL_2) {
+		page_cnt[PT_LVL_2] = num_data_pages;
+		page_cnt[PT_LVL_1] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_2], page_size);
+		page_cnt[PT_LVL_0] =
+		tf_em_page_tbl_pgcnt(page_cnt[PT_LVL_1], page_size);
+	} else {
+		return;
+	}
+}
+
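Continuing the example above (max_lvl = PT_LVL_1, num_data_pages = 8,
2MB pages): page_cnt[PT_LVL_1] = 8 and page_cnt[PT_LVL_0] =
roundup(8, 262144) / 262144 = 1, i.e. a single root page references all
eight data pages.
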
+/**
+ * Size the EM table based on capabilities
+ *
+ * [in] tbl
+ *   EM table to size
+ *
+ * Returns:
+ *   0        - Success
+ *   -EINVAL  - Parameter error
+ *   -ENOMEM  - Out of memory
+ */
+static int
+tf_em_size_table(struct tf_em_table *tbl)
+{
+	uint64_t num_data_pages;
+	uint32_t *page_cnt;
+	int max_lvl;
+	uint32_t num_entries;
+	uint32_t cnt = TF_EM_MIN_ENTRIES;
+
+	/* Ignore entry if both size and number are zero */
+	if (!tbl->entry_size && !tbl->num_entries)
+		return 0;
+
+	/* It is an error if only one of the two is set */
+	if (!tbl->entry_size || !tbl->num_entries)
+		return -EINVAL;
+
+	/* Determine number of page table levels and the number
+	 * of data pages needed to process the given eem table.
+	 */
+	if (tbl->type == RECORD_TABLE) {
+		/*
+		 * For action records just a memory size is provided. Work
+		 * backwards to resolve to number of entries
+		 */
+		num_entries = tbl->num_entries / tbl->entry_size;
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			num_entries = TF_EM_MIN_ENTRIES;
+		} else {
+			while (num_entries > cnt && cnt <= TF_EM_MAX_ENTRIES)
+				cnt *= 2;
+			num_entries = cnt;
+		}
+	} else {
+		num_entries = tbl->num_entries;
+	}
+
+	max_lvl = tf_em_size_page_tbl_lvl(TF_EM_PAGE_SIZE,
+					  tbl->entry_size,
+					  num_entries,
+					  &num_data_pages);
+	if (max_lvl < 0) {
+		PMD_DRV_LOG(WARNING, "EEM: Failed to size page table levels\n");
+		PMD_DRV_LOG(WARNING,
+			    "table: %d data-sz: %016" PRIu64 " page-sz: %u\n",
+			    tbl->type,
+			    (uint64_t)num_entries * tbl->entry_size,
+			    TF_EM_PAGE_SIZE);
+		return -ENOMEM;
+	}
+
+	tbl->num_lvl = max_lvl + 1;
+	tbl->num_data_pages = num_data_pages;
+
+	/* Determine the number of pages needed at each level */
+	page_cnt = tbl->page_cnt;
+	memset(page_cnt, 0, sizeof(tbl->page_cnt));
+	tf_em_size_page_tbls(max_lvl, num_data_pages, TF_EM_PAGE_SIZE,
+				page_cnt);
+
+	PMD_DRV_LOG(INFO, "EEM: Sized page table: %d\n", tbl->type);
+	PMD_DRV_LOG(INFO,
+		    "EEM: lvls: %d sz: %016" PRIu64 " pgs: %016" PRIu64 " l0: %u l1: %u l2: %u\n",
+		    max_lvl + 1,
+		    (uint64_t)num_data_pages * TF_EM_PAGE_SIZE,
+		    num_data_pages,
+		    page_cnt[PT_LVL_0],
+		    page_cnt[PT_LVL_1],
+		    page_cnt[PT_LVL_2]);
+
+	return 0;
+}
+
+/**
+ * Unregisters EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ */
+static void
+tf_em_ctx_unreg(struct tf *tfp,
+		struct tf_tbl_scope_cb *tbl_scope_cb,
+		int dir)
+{
+	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct tf_em_table *tbl;
+	int i;
+
+	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries != 0 && tbl->entry_size != 0) {
+			tf_msg_em_mem_unrgtr(tfp, &tbl->ctx_id);
+			tf_em_free_page_table(tbl);
+		}
+	}
+}
+
+/**
+ * Registers EM Ctx in Firmware
+ *
+ * [in] tfp
+ *   Pointer to a TruFlow handle
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to a table scope control block
+ *
+ * [in] dir
+ *   Receive or transmit direction
+ *
+ * Returns:
+ *   0       - Success
+ *   -ENOMEM - Out of Memory
+ */
+static int
+tf_em_ctx_reg(struct tf *tfp,
+	      struct tf_tbl_scope_cb *tbl_scope_cb,
+	      int dir)
+{
+	struct tf_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
+	struct tf_em_table *tbl;
+	int rc = 0;
+	int i;
+
+	for (i = KEY0_TABLE; i < MAX_TABLE; i++) {
+		tbl = &ctxp->em_tables[i];
+
+		if (tbl->num_entries && tbl->entry_size) {
+			rc = tf_em_size_table(tbl);
+
+			if (rc)
+				goto cleanup;
+
+			rc = tf_em_alloc_page_table(tbl);
+			if (rc)
+				goto cleanup;
+
+			tf_em_setup_page_table(tbl);
+			rc = tf_msg_em_mem_rgtr(tfp,
+						tbl->num_lvl - 1,
+						TF_EM_PAGE_SIZE_ENUM,
+						tbl->l0_dma_addr,
+						&tbl->ctx_id);
+			if (rc)
+				goto cleanup;
+		}
+	}
+	return rc;
+
+cleanup:
+	tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	return rc;
+}
+
+/**
+ * Validates EM number of entries requested
+ *
+ * [in] tbl_scope_cb
+ *   Pointer to table scope control block to be populated
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_em_validate_num_entries(struct tf_tbl_scope_cb *tbl_scope_cb,
+			   struct tf_alloc_tbl_scope_parms *parms)
+{
+	uint32_t cnt;
+
+	if (parms->rx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * ((parms->rx_max_key_sz_in_bits / 8) + 1);
+		uint32_t action_b = ((parms->rx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->rx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			PMD_DRV_LOG(ERR, "EEM: Insufficient memory requested:"
+				    "%uMB\n",
+				    parms->rx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    num_entries);
+			return -EINVAL;
+		}
+
+		parms->rx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->rx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->rx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx flows "
+				    "requested:%u max:%u\n",
+				    parms->rx_num_flows_in_k * TF_KILOBYTE,
+			tbl_scope_cb->em_caps[TF_DIR_RX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		/* must be a power-of-2 supported value
+		 * in the range 32K - 128M
+		 */
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->rx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Rx requested: %u\n",
+				    (parms->rx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	if (parms->tx_mem_size_in_mb != 0) {
+		uint32_t key_b = 2 * (parms->tx_max_key_sz_in_bits / 8 + 1);
+		uint32_t action_b = ((parms->tx_max_action_entry_sz_in_bits / 8)
+				     + 1);
+		uint32_t num_entries = (parms->tx_mem_size_in_mb *
+					TF_MEGABYTE) / (key_b + action_b);
+
+		if (num_entries < TF_EM_MIN_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Insufficient memory requested:%uMB\n",
+				    parms->tx_mem_size_in_mb);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while (num_entries > cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+				    num_entries);
+			return -EINVAL;
+		}
+
+		parms->tx_num_flows_in_k = cnt / TF_KILOBYTE;
+	} else {
+		if ((parms->tx_num_flows_in_k * TF_KILOBYTE) <
+		    TF_EM_MIN_ENTRIES ||
+		    (parms->tx_num_flows_in_k * TF_KILOBYTE) >
+		    tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx flows "
+				    "requested:%u max:%u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE),
+			tbl_scope_cb->em_caps[TF_DIR_TX].max_entries_supported);
+			return -EINVAL;
+		}
+
+		cnt = TF_EM_MIN_ENTRIES;
+		while ((parms->tx_num_flows_in_k * TF_KILOBYTE) != cnt &&
+		       cnt <= TF_EM_MAX_ENTRIES)
+			cnt *= 2;
+
+		if (cnt > TF_EM_MAX_ENTRIES) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Invalid number of Tx requested: %u\n",
+				    (parms->tx_num_flows_in_k * TF_KILOBYTE));
+			return -EINVAL;
+		}
+	}
+
+	/* Rx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY0_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].num_entries =
+		parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[KEY1_TABLE].entry_size =
+		parms->rx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].num_entries
+		= parms->rx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[RECORD_TABLE].entry_size
+		= parms->rx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_RX].em_tables[EFC_TABLE].num_entries
+		= 0;
+
+	/* Tx */
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY0_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].num_entries =
+		parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[KEY1_TABLE].entry_size =
+		parms->tx_max_key_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].num_entries
+		= parms->tx_num_flows_in_k * TF_KILOBYTE;
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[RECORD_TABLE].entry_size
+		= parms->tx_max_action_entry_sz_in_bits / 8;
+
+	tbl_scope_cb->em_ctx_info[TF_DIR_TX].em_tables[EFC_TABLE].num_entries
+		= 0;
+
+	return 0;
+}
+
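A worked example of the memory-size path, with assumed inputs
rx_mem_size_in_mb = 64, rx_max_key_sz_in_bits = 128 and
rx_max_action_entry_sz_in_bits = 256: key_b = 2 * (128/8 + 1) = 34,
action_b = 256/8 + 1 = 33, so num_entries = 64MB / 67 = 1,001,624.
Doubling cnt from 32K stops at 2^20 = 1,048,576, the first power of two
at or above that count, and rx_num_flows_in_k is reported back as 1024.
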
+/**
+ * Internal function to set a Table Entry. Supports all internal Table Types
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_set_tbl_entry_internal(struct tf *tfp,
+			  struct tf_set_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Set the entry */
+	rc = tf_msg_set_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Set failed, type:%d, rc:%d\n",
+			    parms->dir,
+			    parms->type,
+			    rc);
+	}
+
+	return rc;
+}
+
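Note the index handling: parms->idx is session-relative while the
allocation bitmap is zero-based. Assuming, for illustration, an RM range
starting at 1000, an incoming parms->idx of 1005 is converted
(TF_RM_CONVERT_RM_BASE) to bitmap index 5 for the ba_inuse() check,
while the firmware message is still sent the original session-relative
parms->idx.
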
+/**
+ * Internal function to get a Table Entry. Supports all Table Types
+ * except the TF_TBL_TYPE_EXT as that is handled as a table scope.
+ *
+ * [in] tfp
+ *   Pointer to TruFlow handle
+ *
+ * [in] parms
+ *   Pointer to input parameters
+ *
+ * Returns:
+ *   0       - Success
+ *   -EINVAL - Parameter error
+ */
+static int
+tf_get_tbl_entry_internal(struct tf *tfp,
+			  struct tf_get_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Verify that the entry has been previously allocated */
+	id = ba_inuse(session_pool, index);
+	if (id != 1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Invalid or not allocated index, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -EINVAL;
+	}
+
+	/* Get the entry */
+	rc = tf_msg_get_tbl_entry(tfp,
+				  parms->dir,
+				  parms->type,
+				  parms->data_sz_in_bytes,
+				  parms->data,
+				  parms->idx);
+	if (rc) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Get failed, type:%d, rc:%d\n",
+			    parms->dir,
+			    parms->type,
+			    rc);
+	}
+
+	return rc;
+}
+
+#if (TF_SHADOW == 1)
+/**
+ * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for
+ * the requested entry. If found, the ref count is incremented and
+ * returned.
+ *
+ * [in] tfs
+ *   Pointer to session
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry found and ref count incremented
+ *  -ENOENT - Failure, entry not found
+ */
+static int
+tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,
+			  struct tf_alloc_tbl_entry_parms *parms __rte_unused)
+{
+	PMD_DRV_LOG(ERR,
+		    "dir:%d, Entry Alloc with search not supported\n",
+		    parms->dir);
+
+	return -EOPNOTSUPP;
+}
+
+/**
+ * Free Tbl entry from the Shadow DB. Shadow DB is searched for
+ * the requested entry. If found, the ref count is decremented and
+ * the new ref_count returned.
+ *
+ * [in] tfs
+ *   Pointer to session
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry found and ref count decremented
+ *  -ENOENT - Failure, entry not found
+ */
+static int
+tf_free_tbl_entry_shadow(struct tf_session *tfs,
+			 struct tf_free_tbl_entry_parms *parms)
+{
+	PMD_DRV_LOG(ERR,
+		    "dir:%d, Entry Free with search not supported\n",
+		    parms->dir);
+
+	return -EOPNOTSUPP;
+}
+#endif /* TF_SHADOW */
+
+/**
+ * Create External Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ * [in] tbl_scope_id
+ *   id of the table scope
+ * [in] num_entries
+ *   number of entries to write
+ * [in] entry_sz_bytes
+ *   size of each entry
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+static int
+tf_create_tbl_pool_external(struct tf_session *session,
+			    enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb,
+			    uint32_t table_scope_id,
+			    uint32_t num_entries,
+			    uint32_t entry_sz_bytes)
+
+{
+	struct tfp_calloc_parms parms;
+	uint32_t i, j;
+	int rc = 0;
+	struct stack *pool = &tbl_scope_cb->ext_pool[dir][TF_EXT_POOL_0];
+
+	parms.nitems = num_entries;
+	parms.size = sizeof(uint32_t);
+	parms.alignment = 0;
+
+	if (tfp_calloc(&parms) != 0) {
+		PMD_DRV_LOG(ERR, "%d: TBL: external pool failure %s\n",
+			    dir, strerror(-ENOMEM));
+		return -ENOMEM;
+	}
+
+	/* Create empty stack
+	 */
+	rc = stack_init(num_entries, parms.mem_va, pool);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR, "%d: TBL: stack init failure %s\n",
+			    dir, strerror(-rc));
+		goto cleanup;
+	}
+
+	/* Save the allocated memory address so that it can
+	 * be freed when the table scope is freed.
+	 */
+	tbl_scope_cb->ext_pool_mem[dir][TF_EXT_POOL_0] =
+		(uint32_t *)parms.mem_va;
+
+	/* Fill pool with indexes
+	 */
+	j = num_entries * entry_sz_bytes - 1;
+
+	for (i = 0; i < num_entries; i++) {
+		rc = stack_push(pool, j);
+		if (rc != 0) {
+			PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
+				    dir, strerror(-rc));
+			goto cleanup;
+		}
+		j -= entry_sz_bytes;
+	}
+
+	if (!stack_is_full(pool)) {
+		rc = -EINVAL;
+		PMD_DRV_LOG(ERR, "%d TBL: stack failure %s\n",
+			    dir, strerror(-rc));
+		goto cleanup;
+	}
+	/* Set the table scope associated with the pool
+	 */
+	session->ext_pool_2_scope[dir][TF_EXT_POOL_0] = table_scope_id;
+
+	return 0;
+cleanup:
+	tfp_free((void *)parms.mem_va);
+	return rc;
+}
+
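To make the fill order concrete, with assumed values num_entries = 4 and
entry_sz_bytes = 64: j starts at 4 * 64 - 1 = 255 and the loop pushes the
offsets 255, 191, 127 and 63. The pool is a LIFO stack, so the first
stack_pop() returns 63 and allocation proceeds from the bottom of the
external memory upward.
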
+/**
+ * Destroy External Tbl pool of memory indexes.
+ *
+ * [in] session
+ *   Pointer to session
+ * [in] dir
+ *   direction
+ * [in] tbl_scope_cb
+ *   pointer to the table scope
+ *
+ */
+static void
+tf_destroy_tbl_pool_external(struct tf_session *session,
+			    enum tf_dir dir,
+			    struct tf_tbl_scope_cb *tbl_scope_cb)
+{
+	uint32_t *ext_pool_mem =
+		tbl_scope_cb->ext_pool_mem[dir][TF_EXT_POOL_0];
+
+	tfp_free(ext_pool_mem);
+
+	/* Disassociate the table scope from the pool
+	 */
+	session->ext_pool_2_scope[dir][TF_EXT_POOL_0] = TF_TBL_SCOPE_INVALID;
+}
+
+/**
+ * Allocate External Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated - no search support
+ *  -ENOMEM -EINVAL -EOPNOTSUPP
+ *          - Failure, entry not allocated, out of resources
+ */
+static int
+tf_alloc_tbl_entry_pool_external(struct tf *tfp,
+				 struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	uint32_t index;
+	struct tf_session *tfs;
+	uint32_t tbl_scope_id;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_id = tfs->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+	tbl_scope_cb = tbl_scope_cb_find(tfs, tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, table scope not allocated\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_pool[parms->dir][TF_EXT_POOL_0];
+
+	/* Allocate an element
+	 */
+	rc = stack_pop(pool, &index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Allocation failed, type:%d\n",
+		   parms->dir,
+		   parms->type);
+		return rc;
+	}
+	parms->idx = index;
+	return rc;
+}
+
+/**
+ * Allocate Internal Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry allocated
+ *  -ENOMEM - Failure, entry not allocated, out of resources
+ */
+static int
+tf_alloc_tbl_entry_pool_internal(struct tf *tfp,
+				 struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+	int id;
+	int free_cnt;
+	uint32_t index;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	id = ba_alloc(session_pool);
+	if (id == -1) {
+		free_cnt = ba_free_count(session_pool);
+
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Allocation failed, type:%d, free:%d\n",
+		   parms->dir,
+		   parms->type,
+		   free_cnt);
+		return -ENOMEM;
+	}
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_ADD_BASE,
+			    id,
+			    &index);
+	parms->idx = index;
+	return rc;
+}
+
+/**
+ * Free External Tbl entry to the session pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *
+ * - Failure, entry not successfully freed for these reasons
+ *  -ENOMEM
+ *  -EOPNOTSUPP
+ *  -EINVAL
+ */
+static int
+tf_free_tbl_entry_pool_external(struct tf *tfp,
+		       struct tf_free_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_session *tfs;
+	uint32_t index;
+	uint32_t tbl_scope_id;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct stack *pool;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Get the pool info from the table scope
+	 */
+	tbl_scope_id = tfs->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+	tbl_scope_cb = tbl_scope_cb_find(tfs, tbl_scope_id);
+
+	if (tbl_scope_cb == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, table scope error\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+	pool = &tbl_scope_cb->ext_pool[parms->dir][TF_EXT_POOL_0];
+
+	index = parms->idx;
+
+	rc = stack_push(pool, index);
+
+	if (rc != 0) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, consistency error, stack full, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+	}
+	return rc;
+}
+
+/**
+ * Free Internal Tbl entry from the Session Pool.
+ *
+ * [in] tfp
+ *   Pointer to Truflow Handle
+ * [in] parms
+ *   Allocation parameters
+ *
+ * Return:
+ *  0       - Success, entry freed
+ *  -ENOMEM - Failure, entry was not previously allocated
+ */
+static int
+tf_free_tbl_entry_pool_internal(struct tf *tfp,
+		       struct tf_free_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	int id;
+	struct bitalloc *session_pool;
+	struct tf_session *tfs;
+	uint32_t index;
+
+	/* Check parameters */
+	if (tfp == NULL || parms == NULL) {
+		PMD_DRV_LOG(ERR, "Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	if (parms->type != TF_TBL_TYPE_FULL_ACT_RECORD &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC &&
+	    parms->type != TF_TBL_TYPE_ACT_SP_SMAC_IPV4 &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_8B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_16B &&
+	    parms->type != TF_TBL_TYPE_ACT_ENCAP_64B) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Type not supported, type:%d\n",
+			    parms->dir,
+			    parms->type);
+		return -EOPNOTSUPP;
+	}
+
+	/* Lookup the pool using the table type of the element */
+	rc = tf_rm_lookup_tbl_type_pool(tfs,
+					parms->dir,
+					parms->type,
+					&session_pool);
+	/* Error logging handled by tf_rm_lookup_tbl_type_pool */
+	if (rc)
+		return rc;
+
+	index = parms->idx;
+
+	/* Adjust the returned index/offset as there is no guarantee
+	 * that the start is 0 at time of RM allocation
+	 */
+	tf_rm_convert_index(tfs,
+			    parms->dir,
+			    parms->type,
+			    TF_RM_CONVERT_RM_BASE,
+			    parms->idx,
+			    &index);
+
+	/* Check if element was indeed allocated */
+	id = ba_inuse_free(session_pool, index);
+	if (id == -1) {
+		PMD_DRV_LOG(ERR,
+		   "dir:%d, Element not previously alloc'ed, type:%d, idx:%d\n",
+		   parms->dir,
+		   parms->type,
+		   index);
+		return -ENOMEM;
+	}
+
+	return rc;
+}
+
 /* API defined in tf_tbl.h */
 void
 tf_init_tbl_pool(struct tf_session *session)
@@ -41,3 +1401,436 @@ tf_init_tbl_pool(struct tf_session *session)
 			TF_TBL_SCOPE_INVALID;
 	}
 }
+
+/* API defined in tf_em.h */
+struct tf_tbl_scope_cb *
+tbl_scope_cb_find(struct tf_session *session,
+		  uint32_t tbl_scope_id)
+{
+	int i;
+
+	/* Check that id is valid */
+	i = ba_inuse(session->tbl_scope_pool_rx, tbl_scope_id);
+	if (i < 0)
+		return NULL;
+
+	for (i = 0; i < TF_NUM_TBL_SCOPE; i++) {
+		if (session->tbl_scopes[i].tbl_scope_id == tbl_scope_id)
+			return &session->tbl_scopes[i];
+	}
+
+	return NULL;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_eem_tbl_scope_cb(struct tf *tfp,
+			 struct tf_free_tbl_scope_parms *parms)
+{
+	int rc = 0;
+	enum tf_dir  dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+
+	session = (struct tf_session *)(tfp->session->core_data);
+
+	tbl_scope_cb = tbl_scope_cb_find(session,
+					 parms->tbl_scope_id);
+
+	if (tbl_scope_cb == NULL)
+		return -EINVAL;
+
+	/* Free Table control block */
+	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
+
+	/* Free all table scope resources for both directions */
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/* Free associated external pools
+		 */
+		tf_destroy_tbl_pool_external(session,
+					     dir,
+					     tbl_scope_cb);
+		tf_msg_em_op(tfp,
+			     dir,
+			     HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_DISABLE);
+
+		/* free table scope and all associated resources */
+		tf_em_ctx_unreg(tfp, tbl_scope_cb, dir);
+	}
+
+	return rc;
+}
+
+/* API defined in tf_em.h */
+int
+tf_alloc_eem_tbl_scope(struct tf *tfp,
+		       struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+	enum tf_dir dir;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_em_table *em_tables;
+	int index;
+	struct tf_session *session;
+	struct tf_free_tbl_scope_parms free_parms;
+
+	/* check parameters */
+	if (parms == NULL || tfp->session == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	session = (struct tf_session *)tfp->session->core_data;
+
+	/* Get Table Scope control block from the session pool */
+	index = ba_alloc(session->tbl_scope_pool_rx);
+	if (index == -1) {
+		PMD_DRV_LOG(ERR, "EEM: Unable to allocate table scope "
+			    "Control Block\n");
+		return -ENOMEM;
+	}
+
+	tbl_scope_cb = &session->tbl_scopes[index];
+	tbl_scope_cb->index = index;
+	tbl_scope_cb->tbl_scope_id = index;
+	parms->tbl_scope_id = index;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		rc = tf_msg_em_qcaps(tfp,
+				     dir,
+				     &tbl_scope_cb->em_caps[dir]);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				"EEM: Unable to query for EEM capability\n");
+			goto cleanup;
+		}
+	}
+
+	/*
+	 * Validate and setup table sizes
+	 */
+	if (tf_em_validate_num_entries(tbl_scope_cb, parms))
+		goto cleanup;
+
+	for (dir = 0; dir < TF_DIR_MAX; dir++) {
+		/*
+		 * Allocate tables and signal configuration to FW
+		 */
+		rc = tf_em_ctx_reg(tfp, tbl_scope_cb, dir);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Unable to register for EEM ctx\n");
+			goto cleanup;
+		}
+
+		em_tables = tbl_scope_cb->em_ctx_info[dir].em_tables;
+		rc = tf_msg_em_cfg(tfp,
+				   em_tables[KEY0_TABLE].num_entries,
+				   em_tables[KEY0_TABLE].ctx_id,
+				   em_tables[KEY1_TABLE].ctx_id,
+				   em_tables[RECORD_TABLE].ctx_id,
+				   em_tables[EFC_TABLE].ctx_id,
+				   dir);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				"TBL: Unable to configure EEM in firmware\n");
+			goto cleanup_full;
+		}
+
+		rc = tf_msg_em_op(tfp,
+				  dir,
+				  HWRM_TF_EXT_EM_OP_INPUT_OP_EXT_EM_ENABLE);
+
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "EEM: Unable to enable EEM in firmware\n");
+			goto cleanup_full;
+		}
+
+		/* Allocate the pool of offsets of the external memory.
+		 * Initially, this is a single fixed size pool for all external
+		 * actions related to a single table scope.
+		 */
+		rc = tf_create_tbl_pool_external(session,
+						 dir,
+						 tbl_scope_cb,
+						 index,
+						 TF_EXT_POOL_ENTRY_CNT,
+						 TF_EXT_POOL_ENTRY_SZ_BYTES);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "%d TBL: Unable to allocate idx pools %s\n",
+				    dir,
+				    strerror(-rc));
+			goto cleanup_full;
+		}
+	}
+
+	return 0;
+
+cleanup_full:
+	free_parms.tbl_scope_id = index;
+	tf_free_eem_tbl_scope_cb(tfp, &free_parms);
+	return -EINVAL;
+
+cleanup:
+	/* Free Table control block */
+	ba_free(session->tbl_scope_pool_rx, tbl_scope_cb->index);
+	return -EINVAL;
+}
+
+/* API defined in tf_core.h */
+int
+tf_set_tbl_entry(struct tf *tfp,
+		 struct tf_set_tbl_entry_parms *parms)
+{
+	int rc = 0;
+	struct tf_tbl_scope_cb *tbl_scope_cb;
+	struct tf_session *session;
+
+	if (tfp == NULL || parms == NULL || parms->data == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		void *base_addr;
+		uint32_t offset = TF_ACT_REC_INDEX_2_OFFSET(parms->idx);
+		uint32_t tbl_scope_id;
+
+		session = (struct tf_session *)(tfp->session->core_data);
+
+		tbl_scope_id =
+			session->ext_pool_2_scope[parms->dir][TF_EXT_POOL_0];
+
+		if (tbl_scope_id == TF_TBL_SCOPE_INVALID)  {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Table scope not allocated\n",
+				    parms->dir);
+			return -EINVAL;
+		}
+
+		/* Get the table scope control block associated with the
+		 * external pool
+		 */
+
+		tbl_scope_cb = tbl_scope_cb_find(session, tbl_scope_id);
+
+		if (tbl_scope_cb == NULL)
+			return -EINVAL;
+
+		/* External table, implicitly the Action table */
+		base_addr = tf_em_get_table_page(tbl_scope_cb,
+						 parms->dir,
+						 offset,
+						 RECORD_TABLE);
+		if (base_addr == NULL) {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Base address lookup failed\n",
+				    parms->dir);
+			return -EINVAL;
+		}
+
+		offset %= TF_EM_PAGE_SIZE;
+		rte_memcpy((char *)base_addr + offset,
+			   parms->data,
+			   parms->data_sz_in_bytes);
+	} else {
+		/* Internal table type processing */
+		rc = tf_set_tbl_entry_internal(tfp, parms);
+		if (rc) {
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Set failed, type:%d, rc:%d\n",
+				    parms->dir,
+				    parms->type,
+				    rc);
+		}
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_get_tbl_entry(struct tf *tfp,
+		 struct tf_get_tbl_entry_parms *parms)
+{
+	int rc = 0;
+
+	if (tfp == NULL || parms == NULL)
+		return -EINVAL;
+
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, External table type not supported\n",
+			    parms->dir);
+
+		rc = -EOPNOTSUPP;
+	} else {
+		/* Internal table type processing */
+		rc = tf_get_tbl_entry_internal(tfp, parms);
+		if (rc)
+			PMD_DRV_LOG(ERR,
+				    "dir:%d, Get failed, type:%d, rc:%d\n",
+				    parms->dir,
+				    parms->type,
+				    rc);
+	}
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_scope(struct tf *tfp,
+		   struct tf_alloc_tbl_scope_parms *parms)
+{
+	int rc;
+
+	/* check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	rc = tf_alloc_eem_tbl_scope(tfp, parms);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_scope(struct tf *tfp,
+		  struct tf_free_tbl_scope_parms *parms)
+{
+	int rc;
+
+	/* check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+
+	/* free table scope and all associated resources */
+	rc = tf_free_eem_tbl_scope_cb(tfp, parms);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_alloc_tbl_entry(struct tf *tfp,
+		   struct tf_alloc_tbl_entry_parms *parms)
+{
+	int rc;
+#if (TF_SHADOW == 1)
+	struct tf_session *tfs;
+#endif /* TF_SHADOW */
+
+	/* Check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+	/*
+	 * No shadow copy support for external tables, allocate and return
+	 */
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		rc = tf_alloc_tbl_entry_pool_external(tfp, parms);
+		return rc;
+	}
+
+#if (TF_SHADOW == 1)
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Search the Shadow DB for requested element. If not found go
+	 * allocate one from the Session Pool
+	 */
+	if (parms->search_enable && tfs->shadow_copy) {
+		rc = tf_alloc_tbl_entry_shadow(tfs, parms);
+		/* Entry found and parms populated with return data */
+		if (rc == 0)
+			return rc;
+	}
+#endif /* TF_SHADOW */
+
+	rc = tf_alloc_tbl_entry_pool_internal(tfp, parms);
+	if (rc)
+		PMD_DRV_LOG(ERR, "dir%d, Alloc failed, rc:%d\n",
+			    parms->dir,
+			    rc);
+
+	return rc;
+}
+
+/* API defined in tf_core.h */
+int
+tf_free_tbl_entry(struct tf *tfp,
+		  struct tf_free_tbl_entry_parms *parms)
+{
+	int rc;
+#if (TF_SHADOW == 1)
+	struct tf_session *tfs;
+#endif /* TF_SHADOW */
+
+	/* Check parameters */
+	if (parms == NULL || tfp == NULL) {
+		PMD_DRV_LOG(ERR, "TBL: Invalid parameters\n");
+		return -EINVAL;
+	}
+	/*
+	 * No shadow of external tables so just free the entry
+	 */
+	if (parms->type == TF_TBL_TYPE_EXT) {
+		rc = tf_free_tbl_entry_pool_external(tfp, parms);
+		return rc;
+	}
+
+#if (TF_SHADOW == 1)
+	if (tfp->session == NULL || tfp->session->core_data == NULL) {
+		PMD_DRV_LOG(ERR,
+			    "dir:%d, Session info invalid\n",
+			    parms->dir);
+		return -EINVAL;
+	}
+
+	tfs = (struct tf_session *)(tfp->session->core_data);
+
+	/* Search the Shadow DB for the requested element. If not
+	 * found, fall through and free directly from the Session Pool
+	 */
+	if (parms->search_enable && tfs->shadow_copy) {
+		rc = tf_free_tbl_entry_shadow(tfs, parms);
+		/* Entry freed and parms populated with return data */
+		if (rc == 0)
+			return rc;
+	}
+#endif /* TF_SHADOW */
+
+	rc = tf_free_tbl_entry_pool_internal(tfp, parms);
+
+	if (rc)
+		PMD_DRV_LOG(ERR, "dir:%d, Alloc failed, rc:%d\n",
+			    parms->dir,
+			    rc);
+	return rc;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index 5a5e72f..cb7ce9d 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -7,6 +7,7 @@
 #define _TF_TBL_H_
 
 #include <stdint.h>
+#include "stack.h"
 
 enum tf_pg_tbl_lvl {
 	PT_LVL_0,
@@ -15,6 +16,48 @@ enum tf_pg_tbl_lvl {
 	PT_LVL_MAX
 };
 
+enum tf_em_table_type {
+	KEY0_TABLE,
+	KEY1_TABLE,
+	RECORD_TABLE,
+	EFC_TABLE,
+	MAX_TABLE
+};
+
+struct tf_em_page_tbl {
+	uint32_t	pg_count;
+	uint32_t	pg_size;
+	void		**pg_va_tbl;
+	uint64_t	*pg_pa_tbl;
+};
+
+struct tf_em_table {
+	int				type;
+	uint32_t			num_entries;
+	uint16_t			ctx_id;
+	uint32_t			entry_size;
+	int				num_lvl;
+	uint32_t			page_cnt[PT_LVL_MAX];
+	uint64_t			num_data_pages;
+	void				*l0_addr;
+	uint64_t			l0_dma_addr;
+	struct tf_em_page_tbl pg_tbl[PT_LVL_MAX];
+};
+
+struct tf_em_ctx_mem_info {
+	struct tf_em_table		em_tables[MAX_TABLE];
+};
+
+/** table scope control block content */
+struct tf_em_caps {
+	uint32_t flags;
+	uint32_t supported;
+	uint32_t max_entries_supported;
+	uint16_t key_entry_size;
+	uint16_t record_entry_size;
+	uint16_t efc_entry_size;
+};
+
 /** Invalid table scope id */
 #define TF_TBL_SCOPE_INVALID 0xffffffff
 
@@ -27,9 +70,49 @@ enum tf_pg_tbl_lvl {
 struct tf_tbl_scope_cb {
 	uint32_t tbl_scope_id;
 	int index;
+	struct tf_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];
+	struct tf_em_caps          em_caps[TF_DIR_MAX];
+	struct stack               ext_pool[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 	uint32_t              *ext_pool_mem[TF_DIR_MAX][TF_EXT_POOL_CNT_MAX];
 };
 
+/** Hardware Page sizes supported for EEM:
+ *  4K, 8K, 32K, 64K, 256K, 1M, 2M, 4M, 1G.
+ * Round down other page sizes to the next lower hardware page
+ * size supported.
+ */
+#define PAGE_SHIFT 22 /** Selects 2M EM pages via the mapping below */
+
+#if (PAGE_SHIFT < 12)				/** < 4K >> 4K */
+#define TF_EM_PAGE_SHIFT 12
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K
+#elif (PAGE_SHIFT <= 13)			/** 4K, 8K */
+#define TF_EM_PAGE_SHIFT 13
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K
+#elif (PAGE_SHIFT < 16)				/** 16K, 32K >> 8K */
+#define TF_EM_PAGE_SHIFT 15
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_32K
+#elif (PAGE_SHIFT <= 17)			/** 64K, 128K >> 64K */
+#define TF_EM_PAGE_SHIFT 16
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K
+#elif (PAGE_SHIFT <= 19)			/** 256K, 512K >> 256K */
+#define TF_EM_PAGE_SHIFT 18
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K
+#elif (PAGE_SHIFT <= 21)			/** 1M */
+#define TF_EM_PAGE_SHIFT 20
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M
+#elif (PAGE_SHIFT <= 22)			/** 2M, 4M */
+#define TF_EM_PAGE_SHIFT 21
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M
+#elif (PAGE_SHIFT <= 29)			/** 8M ... 512M >> 4M */
+#define TF_EM_PAGE_SHIFT 22
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M
+#else						/** >= 1G >> 1G */
+#define TF_EM_PAGE_SHIFT	30
+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G
+#endif
+
+#define TF_EM_PAGE_SIZE	(1 << TF_EM_PAGE_SHIFT)
+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)
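For example, with the PAGE_SHIFT of 22 defined above, the chain falls
through to the (PAGE_SHIFT <= 22) branch: TF_EM_PAGE_SHIFT becomes 21,
TF_EM_PAGE_SIZE and TF_EM_PAGE_ALIGNMENT are both 2MB, and memory is
registered with the firmware as HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M.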
+
 /**
  * Initialize table pool structure to indicate
  * no table scope has been associated with the
-- 
2.7.4


2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 02/34] net/bnxt: update hwrm prep to use ptr Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 03/34] net/bnxt: add truflow message handlers Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 04/34] net/bnxt: add initial tf core session open Venkat Duvvuru
2020-04-16 17:39         ` Ferruh Yigit
2020-04-16 17:47           ` Ajit Khaparde
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 05/34] net/bnxt: add initial tf core session close support Venkat Duvvuru
2020-04-16 17:39         ` Ferruh Yigit
2020-04-16 17:48           ` Ajit Khaparde
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 06/34] net/bnxt: add tf core session sram functions Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 07/34] net/bnxt: add initial tf core resource mgmt support Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 08/34] net/bnxt: add resource manager functionality Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 09/34] net/bnxt: add tf core identifier support Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 10/34] net/bnxt: add tf core TCAM support Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 11/34] net/bnxt: add tf core table scope support Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 12/34] net/bnxt: add EM/EEM functionality Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 13/34] net/bnxt: fetch SVIF information from the firmware Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 14/34] net/bnxt: fetch vnic info from DPDK port Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 15/34] net/bnxt: add devargs parameter for host memory based TRUFLOW feature Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 16/34] net/bnxt: add support for ULP session manager init Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 17/34] net/bnxt: add support for ULP session manager cleanup Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 18/34] net/bnxt: add helper functions for blob/regfile ops Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 19/34] net/bnxt: add support to process action tables Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 20/34] net/bnxt: add support to process key tables Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 21/34] net/bnxt: add support to free key and action tables Venkat Duvvuru
2020-04-15  8:18       ` [dpdk-dev] [PATCH v4 22/34] net/bnxt: add support to alloc and program key and act tbls Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 23/34] net/bnxt: match rte flow items with flow template patterns Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 24/34] net/bnxt: match rte flow actions with flow template actions Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 25/34] net/bnxt: add support for rte flow item parsing Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 26/34] net/bnxt: add support for rte flow action parsing Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 27/34] net/bnxt: add support for rte flow create driver hook Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 28/34] net/bnxt: add support for rte flow validate " Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 29/34] net/bnxt: add support for rte flow destroy " Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 30/34] net/bnxt: add support for rte flow flush " Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 31/34] net/bnxt: register tf rte flow ops Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 32/34] net/bnxt: disable vector mode when host based TRUFLOW is enabled Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 33/34] net/bnxt: add support for injecting mark into packet’s mbuf Venkat Duvvuru
2020-04-15  8:19       ` [dpdk-dev] [PATCH v4 34/34] net/bnxt: enable meson build on truflow code Venkat Duvvuru
2020-04-22 21:27         ` Thomas Monjalon
2020-04-15 15:29       ` [dpdk-dev] [PATCH v4 00/34] add support for host based flow table management Ajit Khaparde
2020-04-16 16:23       ` Ferruh Yigit
2020-04-16 16:38         ` Ajit Khaparde
2020-04-16 17:40       ` Ferruh Yigit
2020-04-16 17:51         ` Ajit Khaparde
2020-04-17  8:37           ` Ferruh Yigit
2020-04-17 11:03             ` Ferruh Yigit
2020-04-17 16:14               ` Ajit Khaparde

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox (see the example after this list)

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1584459511-5353-13-git-send-email-venkatkumar.duvvuru@broadcom.com \
    --to=venkatkumar.duvvuru@broadcom.com \
    --cc=dev@dpdk.org \
    --cc=peter.spreadborough@broadcom.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
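
For example, the first (mbox) method can be driven entirely from a
shell with any mbox-capable mail client. A minimal sketch, assuming
the thread was saved as ./mbox in the current directory and that
mutt is installed (any client that can open an mbox file works the
same way):

  # Open the saved thread as a mailbox in mutt, move to the message
  # you want to answer, then press 'g' (group-reply) to reply-to-all.
  mutt -f ./mbox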

Be sure your reply has a Subject: header at the top and a blank line
before the message body.