From: Ajit Khaparde <ajit.khaparde@broadcom.com>
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Mike Baucom <michael.baucom@broadcom.com>,
Farah Smith <farah.smith@broadcom.com>
Subject: [dpdk-dev] [PATCH v3 12/22] net/bnxt: add shadow table capability with search
Date: Thu, 23 Jul 2020 22:32:25 -0700
Message-ID: <20200724053235.71069-13-ajit.khaparde@broadcom.com>
In-Reply-To: <20200724053235.71069-1-ajit.khaparde@broadcom.com>
From: Mike Baucom <michael.baucom@broadcom.com>
- Added Index Table shadow tables for searching
- Added Search API to allow reuse of Table entries
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Farah Smith <farah.smith@broadcom.com>
---
drivers/net/bnxt/tf_core/tf_core.c | 66 +-
drivers/net/bnxt/tf_core/tf_core.h | 79 ++-
drivers/net/bnxt/tf_core/tf_device_p4.c | 2 +-
drivers/net/bnxt/tf_core/tf_shadow_tbl.c | 768 +++++++++++++++++++++-
drivers/net/bnxt/tf_core/tf_shadow_tbl.h | 124 ++--
drivers/net/bnxt/tf_core/tf_shadow_tcam.c | 6 +
drivers/net/bnxt/tf_core/tf_tbl.c | 246 ++++++-
drivers/net/bnxt/tf_core/tf_tbl.h | 22 +-
drivers/net/bnxt/tf_core/tf_tcam.h | 2 +-
9 files changed, 1211 insertions(+), 104 deletions(-)
diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c
index ca3280b6b..0dbde1de2 100644
--- a/drivers/net/bnxt/tf_core/tf_core.c
+++ b/drivers/net/bnxt/tf_core/tf_core.c
@@ -75,7 +75,6 @@ tf_open_session(struct tf *tfp,
/* Session vs session client is decided in
* tf_session_open_session()
*/
- printf("TF_OPEN, %s\n", parms->ctrl_chan_name);
rc = tf_session_open_session(tfp, &oparms);
/* Logging handled by tf_session_open_session */
if (rc)
@@ -953,6 +952,71 @@ tf_alloc_tbl_entry(struct tf *tfp,
return 0;
}
+int
+tf_search_tbl_entry(struct tf *tfp,
+ struct tf_search_tbl_entry_parms *parms)
+{
+ int rc;
+ struct tf_session *tfs;
+ struct tf_dev_info *dev;
+ struct tf_tbl_alloc_search_parms sparms;
+
+ TF_CHECK_PARMS2(tfp, parms);
+
+ /* Retrieve the session information */
+ rc = tf_session_get_session(tfp, &tfs);
+ if (rc) {
+ TFP_DRV_LOG(ERR,
+ "%s: Failed to lookup session, rc:%s\n",
+ tf_dir_2_str(parms->dir),
+ strerror(-rc));
+ return rc;
+ }
+
+ /* Retrieve the device information */
+ rc = tf_session_get_device(tfs, &dev);
+ if (rc) {
+ TFP_DRV_LOG(ERR,
+ "%s: Failed to lookup device, rc:%s\n",
+ tf_dir_2_str(parms->dir),
+ strerror(-rc));
+ return rc;
+ }
+
+ if (dev->ops->tf_dev_alloc_search_tbl == NULL) {
+ rc = -EOPNOTSUPP;
+ TFP_DRV_LOG(ERR,
+ "%s: Operation not supported, rc:%s\n",
+ tf_dir_2_str(parms->dir),
+ strerror(-rc));
+ return rc;
+ }
+
+ memset(&sparms, 0, sizeof(struct tf_tbl_alloc_search_parms));
+ sparms.dir = parms->dir;
+ sparms.type = parms->type;
+ sparms.result = parms->result;
+ sparms.result_sz_in_bytes = parms->result_sz_in_bytes;
+ sparms.alloc = parms->alloc;
+ sparms.tbl_scope_id = parms->tbl_scope_id;
+ rc = dev->ops->tf_dev_alloc_search_tbl(tfp, &sparms);
+ if (rc) {
+ TFP_DRV_LOG(ERR,
+ "%s: TBL allocation failed, rc:%s\n",
+ tf_dir_2_str(parms->dir),
+ strerror(-rc));
+ return rc;
+ }
+
+ /* Return the outputs from the search */
+ parms->hit = sparms.hit;
+ parms->search_status = sparms.search_status;
+ parms->ref_cnt = sparms.ref_cnt;
+ parms->idx = sparms.idx;
+
+ return 0;
+}
+
int
tf_free_tbl_entry(struct tf *tfp,
struct tf_free_tbl_entry_parms *parms)
diff --git a/drivers/net/bnxt/tf_core/tf_core.h b/drivers/net/bnxt/tf_core/tf_core.h
index 349a1f1a7..db1093515 100644
--- a/drivers/net/bnxt/tf_core/tf_core.h
+++ b/drivers/net/bnxt/tf_core/tf_core.h
@@ -291,9 +291,9 @@ enum tf_tcam_tbl_type {
};
/**
- * TCAM SEARCH STATUS
+ * SEARCH STATUS
*/
-enum tf_tcam_search_status {
+enum tf_search_status {
/** The entry was not found, but an idx was allocated if requested. */
MISS,
/** The entry was found, and the result/idx are valid */
@@ -1011,7 +1011,7 @@ struct tf_search_tcam_entry_parms {
/**
* [out] Search result status (hit, miss, reject)
*/
- enum tf_tcam_search_status search_status;
+ enum tf_search_status search_status;
/**
* [out] Current refcnt after allocation
*/
@@ -1285,6 +1285,79 @@ int tf_free_tcam_entry(struct tf *tfp,
* @ref tf_bulk_get_tbl_entry
*/
+/**
+ * tf_search_tbl_entry parameter definition
+ */
+struct tf_search_tbl_entry_parms {
+ /**
+ * [in] Receive or transmit direction
+ */
+ enum tf_dir dir;
+ /**
+ * [in] Type of the allocation
+ */
+ enum tf_tbl_type type;
+ /**
+ * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)
+ */
+ uint32_t tbl_scope_id;
+ /**
+ * [in] Result data to search for
+ */
+ uint8_t *result;
+ /**
+ * [in] Result data size in bytes
+ */
+ uint16_t result_sz_in_bytes;
+ /**
+ * [in] Allocate on miss.
+ */
+ uint8_t alloc;
+ /**
+ * [out] Set if matching entry found
+ */
+ uint8_t hit;
+ /**
+ * [out] Search result status (hit, miss, reject)
+ */
+ enum tf_search_status search_status;
+ /**
+ * [out] Current ref count after allocation
+ */
+ uint16_t ref_cnt;
+ /**
+ * [out] Idx of allocated entry or found entry
+ */
+ uint32_t idx;
+};
+
+/**
+ * Search Table Entry (experimental)
+ *
+ * This function searches the shadow copy of an index table for a matching
+ * entry. The result data must match for hit to be set. Only TruFlow core
+ * data is accessed. If shadow_copy is not enabled, an error is returned.
+ *
+ * Implementation:
+ *
+ * A hash is performed on the result data and mapped to a shadow copy entry
+ * where the result is populated. If the result matches the entry, hit is set,
+ * ref_cnt is incremented (if alloc), and the search status indicates what
+ * action the caller can take regarding setting the entry.
+ *
+ * search status should be used as follows:
+ * - On MISS, the caller should set the result into the returned index.
+ *
+ * - On REJECT, the caller should reject the flow since there are no resources.
+ *
+ * - On HIT, the matching index is returned to the caller. Additionally, the
+ * ref_cnt is incremented if alloc was requested.
+ *
+ * Also returns success or failure code.
+ */
+int tf_search_tbl_entry(struct tf *tfp,
+ struct tf_search_tbl_entry_parms *parms);
+
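As an illustration of the MISS/REJECT/HIT contract documented above, here is a minimal standalone sketch of the caller-side decision. The enum mirrors tf_search_status from this header; decide_tbl_action and the action strings are hypothetical, for illustration only:

```c
#include <assert.h>
#include <string.h>

/* Mirrors enum tf_search_status declared earlier in this header */
enum tf_search_status { MISS, HIT, REJECT };

/* Hypothetical helper mapping a search status to the caller's next
 * step, per the tf_search_tbl_entry documentation above.
 */
static const char *
decide_tbl_action(enum tf_search_status status)
{
	switch (status) {
	case MISS:
		/* idx was allocated but is empty: set the result into it */
		return "set-result";
	case HIT:
		/* Matching idx returned; ref_cnt incremented (if alloc) */
		return "reuse-idx";
	case REJECT:
	default:
		/* No resources remain: reject the flow */
		return "reject-flow";
	}
}
```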
/**
* tf_alloc_tbl_entry parameter definition
*/
diff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c
index afb60989e..fe8dec3af 100644
--- a/drivers/net/bnxt/tf_core/tf_device_p4.c
+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c
@@ -126,7 +126,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {
.tf_dev_alloc_ext_tbl = tf_tbl_ext_alloc,
.tf_dev_free_tbl = tf_tbl_free,
.tf_dev_free_ext_tbl = tf_tbl_ext_free,
- .tf_dev_alloc_search_tbl = NULL,
+ .tf_dev_alloc_search_tbl = tf_tbl_alloc_search,
.tf_dev_set_tbl = tf_tbl_set,
.tf_dev_set_ext_tbl = tf_tbl_ext_common_set,
.tf_dev_get_tbl = tf_tbl_get,
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.c b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
index 8f2b6de70..019a26eba 100644
--- a/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.c
@@ -3,61 +3,785 @@
* All rights reserved.
*/
-#include <rte_common.h>
-
+#include "tf_common.h"
+#include "tf_util.h"
+#include "tfp.h"
+#include "tf_core.h"
#include "tf_shadow_tbl.h"
+#include "tf_hash.h"
/**
- * Shadow table DB element
+ * The implementation includes 3 tables per table type.
+ * - hash table
+ * - sized so that a minimum of 4 slots per shadow entry are available to
+ * minimize the likelihood of collisions.
+ * - shadow key table
+ * - sized to the number of entries requested and is directly indexed
+ * - the index is zero based and is the table index - the base address
+ * - the data associated with the entry is stored in the key table.
+ * - The stored key is actually the data associated with the entry.
+ * - shadow result table
+ * - the result table is stored separately since it only needs to be accessed
+ * when the key matches.
+ * - the result has a back pointer to the hash table via the hb handle. The
+ * hb handle is a 32 bit representation of the hash with a valid bit, bucket
+ * element index, and the hash index. It is necessary to store the hb handle
+ * with the result since subsequent removes only provide the table index.
+ *
+ * - Max entries is limited in the current implementation since bit 15 is the
+ * valid bit in the hash table.
+ * - A 16bit hash is calculated and masked based on the number of entries
+ * - 64b wide bucket is used and broken into 4x16bit elements.
+ * This decision is based on quicker bucket scanning to determine if any
+ * elements are in use.
+ * - bit 15 of each bucket element is the valid bit; this is done to avoid
+ * to read the larger key/result data for determining VALID. It also aids
+ * in the more efficient scanning of the bucket for slot usage.
*/
-struct tf_shadow_tbl_element {
- /**
- * Hash table
- */
- void *hash;
- /**
- * Reference count, array of number of table type entries
- */
- uint16_t *ref_count;
+/*
+ * The maximum number of shadow entries supported. The value also doubles as
+ * the maximum number of hash buckets. There are only 15 bits of data per
+ * bucket to point to the shadow tables.
+ */
+#define TF_SHADOW_ENTRIES_MAX (1 << 15)
+
+/* The number of elements(BE) per hash bucket (HB) */
+#define TF_SHADOW_HB_NUM_ELEM (4)
+#define TF_SHADOW_BE_VALID (1 << 15)
+#define TF_SHADOW_BE_IS_VALID(be) (((be) & TF_SHADOW_BE_VALID) != 0)
+
+/**
+ * The hash bucket handle is 32b
+ * - bit 31, the Valid bit
+ * - bit 29-30, the element
+ * - bits 0-15, the hash idx (is masked based on the allocated size)
+ */
+#define TF_SHADOW_HB_HANDLE_IS_VALID(hndl) (((hndl) & (1 << 31)) != 0)
+#define TF_SHADOW_HB_HANDLE_CREATE(idx, be) ((1 << 31) | \
+ ((be) << 29) | (idx))
+
+#define TF_SHADOW_HB_HANDLE_BE_GET(hdl) (((hdl) >> 29) & \
+ (TF_SHADOW_HB_NUM_ELEM - 1))
+
+#define TF_SHADOW_HB_HANDLE_HASH_GET(ctxt, hdl) ((hdl) & \
+ (ctxt)->hash_ctxt.hid_mask)
+
+/**
+ * The idx provided by the caller is within a region, so currently the base is
+ * either added or subtracted from the idx to ensure it can be used as a
+ * compressed index
+ */
+
+/* Convert the table index to a shadow index */
+#define TF_SHADOW_IDX_TO_SHIDX(ctxt, idx) ((idx) - \
+ (ctxt)->shadow_ctxt.base_addr)
+
+/* Convert the shadow index to a tbl index */
+#define TF_SHADOW_SHIDX_TO_IDX(ctxt, idx) ((idx) + \
+ (ctxt)->shadow_ctxt.base_addr)
+
+/* Simple helper masks for clearing an element from the bucket */
+#define TF_SHADOW_BE0_MASK_CLEAR(hb) ((hb) & 0xffffffffffff0000ull)
+#define TF_SHADOW_BE1_MASK_CLEAR(hb) ((hb) & 0xffffffff0000ffffull)
+#define TF_SHADOW_BE2_MASK_CLEAR(hb) ((hb) & 0xffff0000ffffffffull)
+#define TF_SHADOW_BE3_MASK_CLEAR(hb) ((hb) & 0x0000ffffffffffffull)
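The handle and bucket macros above can be exercised in isolation. The following standalone sketch re-implements their packing (names are local to this sketch; a fixed hid_mask of 0xff stands in for the per-context mask):

```c
#include <assert.h>
#include <stdint.h>

#define HB_NUM_ELEM 4	/* 4 x 16-bit elements per 64-bit bucket */

/* Handle layout: bit 31 valid, bits 29-30 element, low bits hash idx */
#define HB_HANDLE_CREATE(idx, be)  ((1u << 31) | ((uint32_t)(be) << 29) | (idx))
#define HB_HANDLE_IS_VALID(h)      (((h) & (1u << 31)) != 0)
#define HB_HANDLE_BE_GET(h)        (((h) >> 29) & (HB_NUM_ELEM - 1))
#define HB_HANDLE_HASH_GET(h, m)   ((h) & (m))

/* Clear one 16-bit bucket element, generalizing the BEn_MASK_CLEAR
 * macros above.
 */
static uint64_t
bucket_clear_be(uint64_t bucket, unsigned int be)
{
	return bucket & ~(0xffffull << (be * 16));
}
```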
+
+/**
+ * This should be coming from external, but for now it is assumed that no key
+ * is greater than 512 bits (64B). This makes allocation of the key table
+ * easier without having to allocate on the fly.
+ */
+#define TF_SHADOW_MAX_KEY_SZ 64
+
+/*
+ * Local only defines for the internal data.
+ */
+
+/**
+ * tf_shadow_tbl_shadow_key_entry is the key entry of the key table.
+ * The key stored in the table is the result data of the index table.
+ */
+struct tf_shadow_tbl_shadow_key_entry {
+ uint8_t key[TF_SHADOW_MAX_KEY_SZ];
+};
+
+/**
+ * tf_shadow_tbl_shadow_result_entry is the result table entry.
+ * The result table writes are broken into two phases:
+ * - The search phase, which stores the hb_handle and key size and
+ * - The set phase, which writes the refcnt
+ */
+struct tf_shadow_tbl_shadow_result_entry {
+ uint16_t key_size;
+ uint32_t refcnt;
+ uint32_t hb_handle;
+};
+
+/**
+ * tf_shadow_tbl_shadow_ctxt holds all information for accessing the key and
+ * result tables.
+ */
+struct tf_shadow_tbl_shadow_ctxt {
+ struct tf_shadow_tbl_shadow_key_entry *sh_key_tbl;
+ struct tf_shadow_tbl_shadow_result_entry *sh_res_tbl;
+ uint32_t base_addr;
+ uint16_t num_entries;
+ uint16_t alloc_idx;
+};
+
+/**
+ * tf_shadow_tbl_hash_ctxt holds all information related to accessing the hash
+ * table.
+ */
+struct tf_shadow_tbl_hash_ctxt {
+ uint64_t *hashtbl;
+ uint16_t hid_mask;
+ uint16_t hash_entries;
};
/**
- * Shadow table DB definition
+ * tf_shadow_tbl_ctxt holds the hash and shadow tables for the current shadow
+ * table db. This structure is per table type as each table type has
+ * its own shadow and hash table.
+ */
+struct tf_shadow_tbl_ctxt {
+ struct tf_shadow_tbl_shadow_ctxt shadow_ctxt;
+ struct tf_shadow_tbl_hash_ctxt hash_ctxt;
+};
+
+/**
+ * tf_shadow_tbl_db is the allocated db structure returned as an opaque
+ * void * pointer to the caller during create db. It holds the pointers for
+ * each table associated with the db.
*/
struct tf_shadow_tbl_db {
- /**
- * The DB consists of an array of elements
- */
- struct tf_shadow_tbl_element *db;
+ /* Each context holds the shadow and hash table information */
+ struct tf_shadow_tbl_ctxt *ctxt[TF_TBL_TYPE_MAX];
};
+/**
+ * Simple routine that decides which table types are searchable.
+ *
+ */
+static int tf_shadow_tbl_is_searchable(enum tf_tbl_type type)
+{
+ int rc = 0;
+
+ switch (type) {
+ case TF_TBL_TYPE_ACT_ENCAP_8B:
+ case TF_TBL_TYPE_ACT_ENCAP_16B:
+ case TF_TBL_TYPE_ACT_ENCAP_32B:
+ case TF_TBL_TYPE_ACT_ENCAP_64B:
+ case TF_TBL_TYPE_ACT_SP_SMAC:
+ case TF_TBL_TYPE_ACT_SP_SMAC_IPV4:
+ case TF_TBL_TYPE_ACT_SP_SMAC_IPV6:
+ case TF_TBL_TYPE_ACT_MODIFY_IPV4:
+ case TF_TBL_TYPE_ACT_MODIFY_SPORT:
+ case TF_TBL_TYPE_ACT_MODIFY_DPORT:
+ rc = 1;
+ break;
+ default:
+ rc = 0;
+ break;
+ }
+
+ return rc;
+}
+
+/**
+ * Returns the number of entries in the context's shadow table.
+ */
+static inline uint16_t
+tf_shadow_tbl_sh_num_entries_get(struct tf_shadow_tbl_ctxt *ctxt)
+{
+ return ctxt->shadow_ctxt.num_entries;
+}
+
+/**
+ * Compare the given key with the key stored in the shadow table.
+ *
+ * Returns 0 if the keys match
+ */
+static int
+tf_shadow_tbl_key_cmp(struct tf_shadow_tbl_ctxt *ctxt,
+ uint8_t *key,
+ uint16_t sh_idx,
+ uint16_t size)
+{
+ if (size != ctxt->shadow_ctxt.sh_res_tbl[sh_idx].key_size ||
+ sh_idx >= tf_shadow_tbl_sh_num_entries_get(ctxt) || !key)
+ return -1;
+
+ return memcmp(key, ctxt->shadow_ctxt.sh_key_tbl[sh_idx].key, size);
+}
+
+/**
+ * Free the memory associated with the context.
+ */
+static void
+tf_shadow_tbl_ctxt_delete(struct tf_shadow_tbl_ctxt *ctxt)
+{
+ if (!ctxt)
+ return;
+
+ tfp_free(ctxt->hash_ctxt.hashtbl);
+ tfp_free(ctxt->shadow_ctxt.sh_key_tbl);
+ tfp_free(ctxt->shadow_ctxt.sh_res_tbl);
+}
+
+/**
+ * The TF Shadow TBL context is per TBL and holds all information relating to
+ * managing the shadow and search capability. This routine allocates data
+ * that must be deallocated by tf_shadow_tbl_ctxt_delete prior to deleting
+ * the shadow db.
+ */
+static int
+tf_shadow_tbl_ctxt_create(struct tf_shadow_tbl_ctxt *ctxt,
+ uint16_t num_entries,
+ uint16_t base_addr)
+{
+ struct tfp_calloc_parms cparms;
+ uint16_t hash_size = 1;
+ uint16_t hash_mask;
+ int rc;
+
+ /* Hash table is a power of two that holds the number of entries */
+ if (num_entries > TF_SHADOW_ENTRIES_MAX) {
+ TFP_DRV_LOG(ERR, "Too many entries for shadow %d > %d\n",
+ num_entries,
+ TF_SHADOW_ENTRIES_MAX);
+ return -ENOMEM;
+ }
+
+ while (hash_size < num_entries)
+ hash_size = hash_size << 1;
+
+ hash_mask = hash_size - 1;
+
+ /* Allocate the hash table */
+ cparms.nitems = hash_size;
+ cparms.size = sizeof(uint64_t);
+ cparms.alignment = 0;
+ rc = tfp_calloc(&cparms);
+ if (rc)
+ goto error;
+ ctxt->hash_ctxt.hashtbl = cparms.mem_va;
+ ctxt->hash_ctxt.hid_mask = hash_mask;
+ ctxt->hash_ctxt.hash_entries = hash_size;
+
+ /* allocate the shadow tables */
+ /* allocate the shadow key table */
+ cparms.nitems = num_entries;
+ cparms.size = sizeof(struct tf_shadow_tbl_shadow_key_entry);
+ cparms.alignment = 0;
+ rc = tfp_calloc(&cparms);
+ if (rc)
+ goto error;
+ ctxt->shadow_ctxt.sh_key_tbl = cparms.mem_va;
+
+ /* allocate the shadow result table */
+ cparms.nitems = num_entries;
+ cparms.size = sizeof(struct tf_shadow_tbl_shadow_result_entry);
+ cparms.alignment = 0;
+ rc = tfp_calloc(&cparms);
+ if (rc)
+ goto error;
+ ctxt->shadow_ctxt.sh_res_tbl = cparms.mem_va;
+
+ ctxt->shadow_ctxt.num_entries = num_entries;
+ ctxt->shadow_ctxt.base_addr = base_addr;
+
+ return 0;
+error:
+ tf_shadow_tbl_ctxt_delete(ctxt);
+
+ return -ENOMEM;
+}
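The sizing loop in tf_shadow_tbl_ctxt_create rounds the entry count up to a power of two so that (size - 1) can serve as the hash mask. As a standalone sketch (round_up_pow2 is a name invented here, not in the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Round num_entries up to the next power of two, as done when sizing
 * the hash table; the mask (size - 1) then reduces a 16-bit hash to a
 * valid bucket index.
 */
static uint16_t
round_up_pow2(uint16_t num_entries)
{
	uint16_t hash_size = 1;

	while (hash_size < num_entries)
		hash_size <<= 1;
	return hash_size;
}
```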
+
+/**
+ * Get a shadow table context given the db and the table type
+ */
+static struct tf_shadow_tbl_ctxt *
+tf_shadow_tbl_ctxt_get(struct tf_shadow_tbl_db *shadow_db,
+ enum tf_tbl_type type)
+{
+ if (type >= TF_TBL_TYPE_MAX ||
+ !shadow_db ||
+ !shadow_db->ctxt[type])
+ return NULL;
+
+ return shadow_db->ctxt[type];
+}
+
+/**
+ * Sets the hash entry into the table given the table context, hash bucket
+ * handle, and shadow index.
+ */
+static inline int
+tf_shadow_tbl_set_hash_entry(struct tf_shadow_tbl_ctxt *ctxt,
+ uint32_t hb_handle,
+ uint16_t sh_idx)
+{
+ uint16_t hid = TF_SHADOW_HB_HANDLE_HASH_GET(ctxt, hb_handle);
+ uint16_t be = TF_SHADOW_HB_HANDLE_BE_GET(hb_handle);
+ uint64_t entry = sh_idx | TF_SHADOW_BE_VALID;
+
+ if (hid >= ctxt->hash_ctxt.hash_entries)
+ return -EINVAL;
+
+ ctxt->hash_ctxt.hashtbl[hid] |= entry << (be * 16);
+ return 0;
+}
+
+/**
+ * Clears the hash entry given the TBL context and hash bucket handle.
+ */
+static inline void
+tf_shadow_tbl_clear_hash_entry(struct tf_shadow_tbl_ctxt *ctxt,
+ uint32_t hb_handle)
+{
+ uint16_t hid, be;
+ uint64_t *bucket;
+
+ if (!TF_SHADOW_HB_HANDLE_IS_VALID(hb_handle))
+ return;
+
+ hid = TF_SHADOW_HB_HANDLE_HASH_GET(ctxt, hb_handle);
+ be = TF_SHADOW_HB_HANDLE_BE_GET(hb_handle);
+ bucket = &ctxt->hash_ctxt.hashtbl[hid];
+
+ switch (be) {
+ case 0:
+ *bucket = TF_SHADOW_BE0_MASK_CLEAR(*bucket);
+ break;
+ case 1:
+ *bucket = TF_SHADOW_BE1_MASK_CLEAR(*bucket);
+ break;
+ case 2:
+ *bucket = TF_SHADOW_BE2_MASK_CLEAR(*bucket);
+ break;
+ case 3:
+ *bucket = TF_SHADOW_BE3_MASK_CLEAR(*bucket);
+ break;
+ default:
+ /*
+ * Since BE_GET masks the handle down to 2 bits, this cannot
+ * happen.
+ */
+ break;
+ }
+}
+
+/**
+ * Clears the shadow key and result entries given the table context and
+ * shadow index.
+ */
+static void
+tf_shadow_tbl_clear_sh_entry(struct tf_shadow_tbl_ctxt *ctxt,
+ uint16_t sh_idx)
+{
+ struct tf_shadow_tbl_shadow_key_entry *sk_entry;
+ struct tf_shadow_tbl_shadow_result_entry *sr_entry;
+
+ if (sh_idx >= tf_shadow_tbl_sh_num_entries_get(ctxt))
+ return;
+
+ sk_entry = &ctxt->shadow_ctxt.sh_key_tbl[sh_idx];
+ sr_entry = &ctxt->shadow_ctxt.sh_res_tbl[sh_idx];
+
+ /*
+ * memset key/result to zero for now, possibly leave the data alone
+ * in the future and rely on the valid bit in the hash table.
+ */
+ memset(sk_entry, 0, sizeof(struct tf_shadow_tbl_shadow_key_entry));
+ memset(sr_entry, 0, sizeof(struct tf_shadow_tbl_shadow_result_entry));
+}
+
+/**
+ * Binds the allocated tbl index with the hash and shadow tables.
+ * The entry will be incomplete until the set has happened with the result
+ * data.
+ */
int
-tf_shadow_tbl_create_db(struct tf_shadow_tbl_create_db_parms *parms __rte_unused)
+tf_shadow_tbl_bind_index(struct tf_shadow_tbl_bind_index_parms *parms)
{
+ int rc;
+ uint16_t idx, len;
+ struct tf_shadow_tbl_ctxt *ctxt;
+ struct tf_shadow_tbl_db *shadow_db;
+ struct tf_shadow_tbl_shadow_key_entry *sk_entry;
+ struct tf_shadow_tbl_shadow_result_entry *sr_entry;
+
+ if (!parms || !TF_SHADOW_HB_HANDLE_IS_VALID(parms->hb_handle) ||
+ !parms->data) {
+ TFP_DRV_LOG(ERR, "Invalid parms\n");
+ return -EINVAL;
+ }
+
+ shadow_db = (struct tf_shadow_tbl_db *)parms->shadow_db;
+ ctxt = tf_shadow_tbl_ctxt_get(shadow_db, parms->type);
+ if (!ctxt) {
+ TFP_DRV_LOG(DEBUG, "%s no ctxt for table\n",
+ tf_tbl_type_2_str(parms->type));
+ return -EINVAL;
+ }
+
+ idx = TF_SHADOW_IDX_TO_SHIDX(ctxt, parms->idx);
+ len = parms->data_sz_in_bytes;
+ if (idx >= tf_shadow_tbl_sh_num_entries_get(ctxt) ||
+ len > TF_SHADOW_MAX_KEY_SZ) {
+ TFP_DRV_LOG(ERR, "%s:%s Invalid len (%d) > %d || oob idx %d\n",
+ tf_dir_2_str(parms->dir),
+ tf_tbl_type_2_str(parms->type),
+ len,
+ TF_SHADOW_MAX_KEY_SZ, idx);
+
+ return -EINVAL;
+ }
+
+ rc = tf_shadow_tbl_set_hash_entry(ctxt, parms->hb_handle, idx);
+ if (rc)
+ return -EINVAL;
+
+ sk_entry = &ctxt->shadow_ctxt.sh_key_tbl[idx];
+ sr_entry = &ctxt->shadow_ctxt.sh_res_tbl[idx];
+
+ /* For tables, the data is the key */
+ memcpy(sk_entry->key, parms->data, len);
+
+ /* Write the result table */
+ sr_entry->key_size = len;
+ sr_entry->hb_handle = parms->hb_handle;
+ sr_entry->refcnt = 1;
+
return 0;
}
+/**
+ * Deletes hash/shadow information if no more references.
+ *
+ * Always returns 0. On return, fparms->ref_cnt is 0 if the caller should
+ * delete the table entry in hardware; otherwise it holds the number of
+ * remaining references to the entry.
+ */
int
-tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms __rte_unused)
+tf_shadow_tbl_remove(struct tf_shadow_tbl_remove_parms *parms)
{
+ uint16_t idx;
+ uint32_t hb_handle;
+ struct tf_shadow_tbl_ctxt *ctxt;
+ struct tf_shadow_tbl_db *shadow_db;
+ struct tf_tbl_free_parms *fparms;
+ struct tf_shadow_tbl_shadow_result_entry *sr_entry;
+
+ if (!parms || !parms->fparms) {
+ TFP_DRV_LOG(ERR, "Invalid parms\n");
+ return -EINVAL;
+ }
+
+ fparms = parms->fparms;
+ if (!tf_shadow_tbl_is_searchable(fparms->type))
+ return 0;
+ /*
+ * Initialize the ref count to zero. The default would be to remove
+ * the entry.
+ */
+ fparms->ref_cnt = 0;
+
+ shadow_db = (struct tf_shadow_tbl_db *)parms->shadow_db;
+ ctxt = tf_shadow_tbl_ctxt_get(shadow_db, fparms->type);
+ if (!ctxt) {
+ TFP_DRV_LOG(DEBUG, "%s no ctxt for table\n",
+ tf_tbl_type_2_str(fparms->type));
+ return 0;
+ }
+
+ idx = TF_SHADOW_IDX_TO_SHIDX(ctxt, fparms->idx);
+ if (idx >= tf_shadow_tbl_sh_num_entries_get(ctxt)) {
+ TFP_DRV_LOG(DEBUG, "%s %d >= %d\n",
+ tf_tbl_type_2_str(fparms->type),
+ fparms->idx,
+ tf_shadow_tbl_sh_num_entries_get(ctxt));
+ return 0;
+ }
+
+ sr_entry = &ctxt->shadow_ctxt.sh_res_tbl[idx];
+ if (sr_entry->refcnt <= 1) {
+ hb_handle = sr_entry->hb_handle;
+ tf_shadow_tbl_clear_hash_entry(ctxt, hb_handle);
+ tf_shadow_tbl_clear_sh_entry(ctxt, idx);
+ } else {
+ sr_entry->refcnt--;
+ fparms->ref_cnt = sr_entry->refcnt;
+ }
+
return 0;
}
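The remove path above only clears the hash and shadow state once the last reference is dropped; otherwise it just decrements the count. A minimal standalone model of that decision (shadow_entry_unref is an illustrative name, not driver API):

```c
#include <assert.h>
#include <stdint.h>

/* Returns the remaining reference count. A return of 0 means the caller
 * should also delete the hardware table entry, mirroring the
 * fparms->ref_cnt output of tf_shadow_tbl_remove.
 */
static uint32_t
shadow_entry_unref(uint32_t *refcnt)
{
	if (*refcnt <= 1) {
		/* Last reference: clear shadow/hash state */
		*refcnt = 0;
		return 0;
	}
	/* Entry is shared: keep it, report remaining references */
	return --(*refcnt);
}
```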
int
-tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms __rte_unused)
+tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms)
{
+ uint16_t len;
+ uint64_t bucket;
+ uint32_t i, hid32;
+ struct tf_shadow_tbl_ctxt *ctxt;
+ struct tf_shadow_tbl_db *shadow_db;
+ uint16_t hid16, hb_idx, hid_mask, shtbl_idx, shtbl_key, be_valid;
+ struct tf_tbl_alloc_search_parms *sparms;
+ uint32_t be_avail = TF_SHADOW_HB_NUM_ELEM;
+
+ if (!parms || !parms->sparms) {
+ TFP_DRV_LOG(ERR, "tbl search with invalid parms\n");
+ return -EINVAL;
+ }
+
+ sparms = parms->sparms;
+ /* Check that caller was supposed to call search */
+ if (!tf_shadow_tbl_is_searchable(sparms->type))
+ return -EINVAL;
+
+ /* Initialize return values to invalid */
+ sparms->hit = 0;
+ sparms->search_status = REJECT;
+ parms->hb_handle = 0;
+ sparms->ref_cnt = 0;
+
+ shadow_db = (struct tf_shadow_tbl_db *)parms->shadow_db;
+ ctxt = tf_shadow_tbl_ctxt_get(shadow_db, sparms->type);
+ if (!ctxt) {
+ TFP_DRV_LOG(ERR, "%s Unable to get tbl mgr context\n",
+ tf_tbl_type_2_str(sparms->type));
+ return -EINVAL;
+ }
+
+ len = sparms->result_sz_in_bytes;
+ if (len > TF_SHADOW_MAX_KEY_SZ || !sparms->result || !len) {
+ TFP_DRV_LOG(ERR, "%s:%s Invalid parms %d : %p\n",
+ tf_dir_2_str(sparms->dir),
+ tf_tbl_type_2_str(sparms->type),
+ len,
+ sparms->result);
+ return -EINVAL;
+ }
+
+ /*
+ * Calculate the crc32
+ * Fold it to create a 16b value
+ * Reduce it to fit the table
+ */
+ hid32 = tf_hash_calc_crc32(sparms->result, len);
+ hid16 = (uint16_t)(((hid32 >> 16) & 0xffff) ^ (hid32 & 0xffff));
+ hid_mask = ctxt->hash_ctxt.hid_mask;
+ hb_idx = hid16 & hid_mask;
+
+ bucket = ctxt->hash_ctxt.hashtbl[hb_idx];
+ if (!bucket) {
+ /* empty bucket means a miss and available entry */
+ sparms->search_status = MISS;
+ parms->hb_handle = TF_SHADOW_HB_HANDLE_CREATE(hb_idx, 0);
+ sparms->idx = 0;
+ return 0;
+ }
+
+ /* Set the avail to max so we can detect when there is an avail entry */
+ be_avail = TF_SHADOW_HB_NUM_ELEM;
+ for (i = 0; i < TF_SHADOW_HB_NUM_ELEM; i++) {
+ shtbl_idx = (uint16_t)((bucket >> (i * 16)) & 0xffff);
+ be_valid = TF_SHADOW_BE_IS_VALID(shtbl_idx);
+ if (!be_valid) {
+ /* The element is avail, keep going */
+ be_avail = i;
+ continue;
+ }
+ /* There is a valid entry, compare it */
+ shtbl_key = shtbl_idx & ~TF_SHADOW_BE_VALID;
+ if (!tf_shadow_tbl_key_cmp(ctxt,
+ sparms->result,
+ shtbl_key,
+ len)) {
+ /*
+ * It matches, increment the ref count if the caller
+ * requested allocation and return the info
+ */
+ if (sparms->alloc)
+ ctxt->shadow_ctxt.sh_res_tbl[shtbl_key].refcnt =
+ ctxt->shadow_ctxt.sh_res_tbl[shtbl_key].refcnt + 1;
+
+ sparms->hit = 1;
+ sparms->search_status = HIT;
+ parms->hb_handle =
+ TF_SHADOW_HB_HANDLE_CREATE(hb_idx, i);
+ sparms->idx = TF_SHADOW_SHIDX_TO_IDX(ctxt, shtbl_key);
+ sparms->ref_cnt =
+ ctxt->shadow_ctxt.sh_res_tbl[shtbl_key].refcnt;
+
+ return 0;
+ }
+ }
+
+ /* No hits, return avail entry if exists */
+ if (be_avail < TF_SHADOW_HB_NUM_ELEM) {
+ /*
+ * There is an available hash entry, so return MISS and the
+ * hash handle for the subsequent bind.
+ */
+ parms->hb_handle = TF_SHADOW_HB_HANDLE_CREATE(hb_idx, be_avail);
+ sparms->search_status = MISS;
+ sparms->hit = 0;
+ sparms->idx = 0;
+ } else {
+ /* No room for the entry in the hash table, must REJECT */
+ sparms->search_status = REJECT;
+ }
+
return 0;
}
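The hash derivation in tf_shadow_tbl_search folds a 32-bit CRC into 16 bits by XORing the halves, then masks it down to the (power-of-two) table size. The fold itself can be checked standalone (fold16 is a name used here for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Fold a 32-bit hash into 16 bits by XORing the halves, then reduce it
 * to a bucket index with the power-of-two mask, as done in
 * tf_shadow_tbl_search.
 */
static uint16_t
fold16(uint32_t hid32, uint16_t hid_mask)
{
	uint16_t hid16 = (uint16_t)(((hid32 >> 16) & 0xffff) ^
				    (hid32 & 0xffff));

	return hid16 & hid_mask;
}
```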
int
-tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms __rte_unused)
+tf_shadow_tbl_insert(struct tf_shadow_tbl_insert_parms *parms)
{
+ uint16_t idx;
+ struct tf_shadow_tbl_ctxt *ctxt;
+ struct tf_tbl_set_parms *sparms;
+ struct tf_shadow_tbl_db *shadow_db;
+ struct tf_shadow_tbl_shadow_result_entry *sr_entry;
+
+ if (!parms || !parms->sparms) {
+ TFP_DRV_LOG(ERR, "Null parms\n");
+ return -EINVAL;
+ }
+
+ sparms = parms->sparms;
+ if (!sparms->data || !sparms->data_sz_in_bytes) {
+ TFP_DRV_LOG(ERR, "%s:%s No result to set.\n",
+ tf_dir_2_str(sparms->dir),
+ tf_tbl_type_2_str(sparms->type));
+ return -EINVAL;
+ }
+
+ shadow_db = (struct tf_shadow_tbl_db *)parms->shadow_db;
+ ctxt = tf_shadow_tbl_ctxt_get(shadow_db, sparms->type);
+ if (!ctxt) {
+ /* We aren't tracking this table, so return success */
+ TFP_DRV_LOG(DEBUG, "%s Unable to get tbl mgr context\n",
+ tf_tbl_type_2_str(sparms->type));
+ return 0;
+ }
+
+ idx = TF_SHADOW_IDX_TO_SHIDX(ctxt, sparms->idx);
+ if (idx >= tf_shadow_tbl_sh_num_entries_get(ctxt)) {
+ TFP_DRV_LOG(ERR, "%s:%s Invalid idx(0x%x)\n",
+ tf_dir_2_str(sparms->dir),
+ tf_tbl_type_2_str(sparms->type),
+ sparms->idx);
+ return -EINVAL;
+ }
+
+ /* Write the result table, the key/hash has been written already */
+ sr_entry = &ctxt->shadow_ctxt.sh_res_tbl[idx];
+
+ /*
+ * If the handle is not valid, the bind was never called. We aren't
+ * tracking this entry.
+ */
+ if (!TF_SHADOW_HB_HANDLE_IS_VALID(sr_entry->hb_handle))
+ return 0;
+
+ sr_entry->refcnt = 1;
+
return 0;
}
int
-tf_shadow_tbl_remove(struct tf_shadow_tbl_remove_parms *parms __rte_unused)
+tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms)
{
+ struct tf_shadow_tbl_db *shadow_db;
+ int i;
+
+ TF_CHECK_PARMS1(parms);
+
+ shadow_db = (struct tf_shadow_tbl_db *)parms->shadow_db;
+ if (!shadow_db) {
+ TFP_DRV_LOG(DEBUG, "Shadow db is NULL, cannot be freed\n");
+ return -EINVAL;
+ }
+
+ for (i = 0; i < TF_TBL_TYPE_MAX; i++) {
+ if (shadow_db->ctxt[i]) {
+ tf_shadow_tbl_ctxt_delete(shadow_db->ctxt[i]);
+ tfp_free(shadow_db->ctxt[i]);
+ }
+ }
+
+ tfp_free(shadow_db);
+
return 0;
}
+
+/**
+ * Allocate the shadow table DB and per-type resources for search support.
+ *
+ */
+int tf_shadow_tbl_create_db(struct tf_shadow_tbl_create_db_parms *parms)
+{
+ int rc;
+ int i;
+ uint16_t base;
+ struct tfp_calloc_parms cparms;
+ struct tf_shadow_tbl_db *shadow_db = NULL;
+
+ TF_CHECK_PARMS1(parms);
+
+ /* Build the shadow DB per the request */
+ cparms.nitems = 1;
+ cparms.size = sizeof(struct tf_shadow_tbl_db);
+ cparms.alignment = 0;
+ rc = tfp_calloc(&cparms);
+ if (rc)
+ return rc;
+ shadow_db = (void *)cparms.mem_va;
+
+ for (i = 0; i < TF_TBL_TYPE_MAX; i++) {
+ /* If the element didn't request an allocation, or the type is
+ * not searchable, there is no need to create a context.
+ */
+ if (!parms->cfg->alloc_cnt[i] ||
+ !tf_shadow_tbl_is_searchable(i)) {
+ shadow_db->ctxt[i] = NULL;
+ continue;
+ }
+
+ cparms.nitems = 1;
+ cparms.size = sizeof(struct tf_shadow_tbl_ctxt);
+ cparms.alignment = 0;
+ rc = tfp_calloc(&cparms);
+ if (rc)
+ goto error;
+
+ shadow_db->ctxt[i] = cparms.mem_va;
+ base = parms->cfg->base_addr[i];
+ rc = tf_shadow_tbl_ctxt_create(shadow_db->ctxt[i],
+ parms->cfg->alloc_cnt[i],
+ base);
+ if (rc)
+ goto error;
+ }
+
+ *parms->shadow_db = (void *)shadow_db;
+
+ TFP_DRV_LOG(INFO,
+ "TF SHADOW TABLE - initialized\n");
+
+ return 0;
+error:
+ for (i = 0; i < TF_TBL_TYPE_MAX; i++) {
+ if (shadow_db->ctxt[i]) {
+ tf_shadow_tbl_ctxt_delete(shadow_db->ctxt[i]);
+ tfp_free(shadow_db->ctxt[i]);
+ }
+ }
+
+ tfp_free(shadow_db);
+
+ return -ENOMEM;
+}
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tbl.h b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
index dfd336e53..e73381f25 100644
--- a/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tbl.h
@@ -8,8 +8,6 @@
#include "tf_core.h"
-struct tf;
-
/**
* The Shadow Table module provides shadow DB handling for table based
* TF types. A shadow DB provides the capability that allows for reuse
@@ -32,19 +30,22 @@ struct tf;
*/
struct tf_shadow_tbl_cfg_parms {
/**
- * TF Table type
+ * [in] The number of elements in the alloc_cnt and base_addr arrays.
+ * For now, it should always be equal to TF_TBL_TYPE_MAX
*/
- enum tf_tbl_type type;
+ int num_entries;
/**
- * Number of entries the Shadow DB needs to hold
+ * [in] Resource allocation count array
+ * This array content originates from the tf_session_resources
+ * that is passed in on session open
+ * Array size is TF_TBL_TYPE_MAX
*/
- int num_entries;
-
+ uint16_t *alloc_cnt;
/**
- * Element width for this table type
+ * [in] The base index for each table
*/
- int element_width;
+ uint16_t base_addr[TF_TBL_TYPE_MAX];
};
/**
@@ -52,17 +53,17 @@ struct tf_shadow_tbl_cfg_parms {
*/
struct tf_shadow_tbl_create_db_parms {
/**
- * [in] Configuration information for the shadow db
+ * [in] Receive or transmit direction
*/
- struct tf_shadow_tbl_cfg_parms *cfg;
+ enum tf_dir dir;
/**
- * [in] Number of elements in the parms structure
+ * [in] Configuration information for the shadow db
*/
- uint16_t num_elements;
+ struct tf_shadow_tbl_cfg_parms *cfg;
/**
* [out] Shadow table DB handle
*/
- void *tf_shadow_tbl_db;
+ void **shadow_db;
};
/**
@@ -70,9 +71,9 @@ struct tf_shadow_tbl_create_db_parms {
*/
struct tf_shadow_tbl_free_db_parms {
/**
- * Shadow table DB handle
+ * [in] Shadow table DB handle
*/
- void *tf_shadow_tbl_db;
+ void *shadow_db;
};
/**
@@ -82,79 +83,77 @@ struct tf_shadow_tbl_search_parms {
/**
* [in] Shadow table DB handle
*/
- void *tf_shadow_tbl_db;
+ void *shadow_db;
/**
- * [in] Table type
+ * [inout] The search parms from tf core
*/
- enum tf_tbl_type type;
- /**
- * [in] Pointer to entry blob value in remap table to match
- */
- uint8_t *entry;
- /**
- * [in] Size of the entry blob passed in bytes
- */
- uint16_t entry_sz;
- /**
- * [out] Index of the found element returned if hit
- */
- uint16_t *index;
+ struct tf_tbl_alloc_search_parms *sparms;
/**
* [out] The hash bucket handle, used in a subsequent bind on MISS
*/
- uint16_t *ref_cnt;
+ uint32_t hb_handle;
};
/**
- * Shadow table insert parameters
+ * Shadow Table bind index parameters
*/
-struct tf_shadow_tbl_insert_parms {
+struct tf_shadow_tbl_bind_index_parms {
/**
- * [in] Shadow table DB handle
+ * [in] Shadow table DB handle
*/
- void *tf_shadow_tbl_db;
+ void *shadow_db;
/**
- * [in] Tbl type
+ * [in] Receive or transmit direction
+ */
+ enum tf_dir dir;
+ /**
+ * [in] Table type
*/
enum tf_tbl_type type;
/**
- * [in] Pointer to entry blob value in remap table to match
+ * [in] Index of the entry to program
*/
- uint8_t *entry;
+ uint16_t idx;
/**
- * [in] Size of the entry blob passed in bytes
+ * [in] Entry data blob to bind to the index
*/
- uint16_t entry_sz;
+ uint8_t *data;
/**
- * [in] Entry to update
+ * [in] Data size in bytes
*/
- uint16_t index;
+ uint16_t data_sz_in_bytes;
/**
- * [out] Reference count after insert
+ * [in] The hash bucket handle returned from the search
*/
- uint16_t *ref_cnt;
+ uint32_t hb_handle;
};
/**
- * Shadow table remove parameters
+ * Shadow table insert parameters
*/
-struct tf_shadow_tbl_remove_parms {
+struct tf_shadow_tbl_insert_parms {
/**
* [in] Shadow table DB handle
*/
- void *tf_shadow_tbl_db;
+ void *shadow_db;
/**
- * [in] Tbl type
+ * [in] The insert parms from tf core
*/
- enum tf_tbl_type type;
+ struct tf_tbl_set_parms *sparms;
+};
+
+/**
+ * Shadow table remove parameters
+ */
+struct tf_shadow_tbl_remove_parms {
/**
- * [in] Entry to update
+ * [in] Shadow table DB handle
*/
- uint16_t index;
+ void *shadow_db;
/**
- * [out] Reference count after removal
+ * [in] The free parms from tf core
*/
- uint16_t *ref_cnt;
+ struct tf_tbl_free_parms *fparms;
};
/**
@@ -206,9 +205,26 @@ int tf_shadow_tbl_free_db(struct tf_shadow_tbl_free_db_parms *parms);
* Returns
* - (0) if successful, element was found.
* - (-EINVAL) on failure.
+ *
+ * If the search misses but there is room for insertion, the returned
+ * hb_handle is used for the insertion done by the bind index API
*/
int tf_shadow_tbl_search(struct tf_shadow_tbl_search_parms *parms);
+/**
+ * Bind Shadow table db hash and result tables with result from search/alloc
+ *
+ * [in] parms
+ * Pointer to the bind index parameters
+ *
+ * Returns
+ * - (0) if successful
+ * - (-EINVAL) on failure.
+ *
+ * This is called only after a search MISS that returned an hb_handle
+ */
+int tf_shadow_tbl_bind_index(struct tf_shadow_tbl_bind_index_parms *parms);
+
/**
* Inserts an element into the Shadow table DB. Will fail if the
* elements ref_count is different from 0. Ref_count after insert will
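For reviewers, the search-then-bind reuse flow introduced by this header can be sketched with a small self-contained model. Everything below is illustrative only: the `mock_*` names, the fixed blob size, and the linear scan stand in for the driver's real hash-bucket lookup and are not part of the TruFlow API.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model of the shadow-table idea: entries are looked up by
 * result blob; a HIT bumps the reference count and reuses the
 * existing index, a MISS lets the caller allocate a fresh index
 * and bind the blob to it. */
#define MOCK_ENTRIES 8
#define MOCK_BLOB_SZ 16

struct mock_shadow {
	uint8_t blob[MOCK_ENTRIES][MOCK_BLOB_SZ];
	uint16_t ref_cnt[MOCK_ENTRIES];
};

enum mock_status { MOCK_MISS, MOCK_HIT };

enum mock_status
mock_search(struct mock_shadow *db, const uint8_t *blob, uint16_t *idx)
{
	uint16_t i;

	for (i = 0; i < MOCK_ENTRIES; i++) {
		if (db->ref_cnt[i] &&
		    !memcmp(db->blob[i], blob, MOCK_BLOB_SZ)) {
			db->ref_cnt[i]++; /* HIT: reuse, add a reference */
			*idx = i;
			return MOCK_HIT;
		}
	}
	return MOCK_MISS; /* caller allocates an index, then binds */
}

void
mock_bind(struct mock_shadow *db, uint16_t idx, const uint8_t *blob)
{
	memcpy(db->blob[idx], blob, MOCK_BLOB_SZ);
	db->ref_cnt[idx] = 1; /* first reference */
}
```

A second search with an identical result blob hits and returns the previously bound index, which is the entry-reuse behavior the real search/bind pair provides.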
diff --git a/drivers/net/bnxt/tf_core/tf_shadow_tcam.c b/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
index beaea0340..a0130d6a8 100644
--- a/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
+++ b/drivers/net/bnxt/tf_core/tf_shadow_tcam.c
@@ -373,6 +373,12 @@ tf_shadow_tcam_clear_hash_entry(struct tf_shadow_tcam_ctxt *ctxt,
case 3:
*bucket = TF_SHADOW_TCAM_BE2_MASK_CLEAR(*bucket);
break;
+ default:
+ /*
+ * Since the BE_GET masks non-inclusive bits, this will not
+ * happen.
+ */
+ break;
}
}
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c
index 9ebaa34e4..bec52105e 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.c
+++ b/drivers/net/bnxt/tf_core/tf_tbl.c
@@ -13,6 +13,9 @@
#include "tf_util.h"
#include "tf_msg.h"
#include "tfp.h"
+#include "tf_shadow_tbl.h"
+#include "tf_session.h"
+#include "tf_device.h"
struct tf;
@@ -25,7 +28,7 @@ static void *tbl_db[TF_DIR_MAX];
/**
* Table Shadow DBs
*/
-/* static void *shadow_tbl_db[TF_DIR_MAX]; */
+static void *shadow_tbl_db[TF_DIR_MAX];
/**
* Init flag, set on bind and cleared on unbind
@@ -35,14 +38,19 @@ static uint8_t init;
/**
* Shadow init flag, set on bind and cleared on unbind
*/
-/* static uint8_t shadow_init; */
+static uint8_t shadow_init;
int
tf_tbl_bind(struct tf *tfp,
struct tf_tbl_cfg_parms *parms)
{
- int rc;
- int i;
+ int rc, d, i;
+ struct tf_rm_alloc_info info;
+ struct tf_rm_free_db_parms fparms;
+ struct tf_shadow_tbl_free_db_parms fshadow;
+ struct tf_rm_get_alloc_info_parms ainfo;
+ struct tf_shadow_tbl_cfg_parms shadow_cfg;
+ struct tf_shadow_tbl_create_db_parms shadow_cdb;
struct tf_rm_create_db_parms db_cfg = { 0 };
TF_CHECK_PARMS2(tfp, parms);
@@ -58,26 +66,86 @@ tf_tbl_bind(struct tf *tfp,
db_cfg.num_elements = parms->num_elements;
db_cfg.cfg = parms->cfg;
- for (i = 0; i < TF_DIR_MAX; i++) {
- db_cfg.dir = i;
- db_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;
- db_cfg.rm_db = &tbl_db[i];
+ for (d = 0; d < TF_DIR_MAX; d++) {
+ db_cfg.dir = d;
+ db_cfg.alloc_cnt = parms->resources->tbl_cnt[d].cnt;
+ db_cfg.rm_db = &tbl_db[d];
rc = tf_rm_create_db(tfp, &db_cfg);
if (rc) {
TFP_DRV_LOG(ERR,
"%s: Table DB creation failed\n",
- tf_dir_2_str(i));
+ tf_dir_2_str(d));
return rc;
}
}
+ /* Initialize the Shadow Table. */
+ if (parms->shadow_copy) {
+ for (d = 0; d < TF_DIR_MAX; d++) {
+ memset(&shadow_cfg, 0, sizeof(shadow_cfg));
+ memset(&shadow_cdb, 0, sizeof(shadow_cdb));
+ /* Get the base addresses of the tables */
+ for (i = 0; i < TF_TBL_TYPE_MAX; i++) {
+ memset(&info, 0, sizeof(info));
+
+ if (!parms->resources->tbl_cnt[d].cnt[i])
+ continue;
+ ainfo.rm_db = tbl_db[d];
+ ainfo.db_index = i;
+ ainfo.info = &info;
+ rc = tf_rm_get_info(&ainfo);
+ if (rc)
+ goto error;
+
+ shadow_cfg.base_addr[i] = info.entry.start;
+ }
+
+ /* Create the shadow db */
+ shadow_cfg.alloc_cnt =
+ parms->resources->tbl_cnt[d].cnt;
+ shadow_cfg.num_entries = parms->num_elements;
+
+ shadow_cdb.shadow_db = &shadow_tbl_db[d];
+ shadow_cdb.cfg = &shadow_cfg;
+ rc = tf_shadow_tbl_create_db(&shadow_cdb);
+ if (rc) {
+ TFP_DRV_LOG(ERR,
+ "Shadow TBL DB creation failed "
+ "rc=%d\n", rc);
+ goto error;
+ }
+ }
+ shadow_init = 1;
+ }
+
init = 1;
TFP_DRV_LOG(INFO,
"Table Type - initialized\n");
return 0;
+error:
+ for (d = 0; d < TF_DIR_MAX; d++) {
+ memset(&fparms, 0, sizeof(fparms));
+ fparms.dir = d;
+ fparms.rm_db = tbl_db[d];
+ /* Ignoring return here since we are in the error case */
+ (void)tf_rm_free_db(tfp, &fparms);
+
+ if (parms->shadow_copy) {
+ fshadow.shadow_db = shadow_tbl_db[d];
+ tf_shadow_tbl_free_db(&fshadow);
+ shadow_tbl_db[d] = NULL;
+ }
+
+ tbl_db[d] = NULL;
+ }
+
+ shadow_init = 0;
+ init = 0;
+
+ return rc;
}
int
@@ -86,6 +154,7 @@ tf_tbl_unbind(struct tf *tfp)
int rc;
int i;
struct tf_rm_free_db_parms fparms = { 0 };
+ struct tf_shadow_tbl_free_db_parms fshadow;
TF_CHECK_PARMS1(tfp);
@@ -104,9 +173,17 @@ tf_tbl_unbind(struct tf *tfp)
return rc;
tbl_db[i] = NULL;
+
+ if (shadow_init) {
+ memset(&fshadow, 0, sizeof(fshadow));
+ fshadow.shadow_db = shadow_tbl_db[i];
+ tf_shadow_tbl_free_db(&fshadow);
+ shadow_tbl_db[i] = NULL;
+ }
}
init = 0;
+ shadow_init = 0;
return 0;
}
@@ -153,6 +230,7 @@ tf_tbl_free(struct tf *tfp __rte_unused,
int rc;
struct tf_rm_is_allocated_parms aparms = { 0 };
struct tf_rm_free_parms fparms = { 0 };
+ struct tf_shadow_tbl_remove_parms shparms;
int allocated = 0;
TF_CHECK_PARMS2(tfp, parms);
@@ -182,6 +260,36 @@ tf_tbl_free(struct tf *tfp __rte_unused,
return -EINVAL;
}
+ /*
+ * The Shadow mgmt, if enabled, determines if the entry needs
+ * to be deleted.
+ */
+ if (shadow_init) {
+ memset(&shparms, 0, sizeof(shparms));
+ shparms.shadow_db = shadow_tbl_db[parms->dir];
+ shparms.fparms = parms;
+ rc = tf_shadow_tbl_remove(&shparms);
+ if (rc) {
+ /*
+ * Should not get here, log it and let the entry be
+ * deleted.
+ */
+ TFP_DRV_LOG(ERR, "%s: Shadow free fail, "
+ "type:%d index:%d deleting the entry.\n",
+ tf_dir_2_str(parms->dir),
+ parms->type,
+ parms->idx);
+ } else {
+ /*
+ * If the entry still has references, just return the
+ * ref count to the caller. No need to remove entry
+ * from rm.
+ */
+ if (parms->ref_cnt >= 1)
+ return rc;
+ }
+ }
+
/* Free requested element */
fparms.rm_db = tbl_db[parms->dir];
fparms.db_index = parms->type;
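The refcount-driven free in tf_tbl_free above reduces to this tiny sketch. The helper name and return convention are hypothetical, not part of the driver API: the shadow layer drops one reference, and only when no references remain does it signal the caller to free the index from the resource manager.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Models the decision tf_tbl_free makes: while other users still
 * reference the entry, only the count drops and the index stays
 * allocated (returns 1); on the last reference the caller is told
 * to free the index (returns 0).
 */
int
shadow_remove(uint16_t *ref_cnt)
{
	if (*ref_cnt > 1) {
		(*ref_cnt)--; /* still referenced: keep the entry */
		return 1;
	}
	*ref_cnt = 0; /* last reference: caller frees the index */
	return 0;
}
```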
@@ -200,10 +308,124 @@ tf_tbl_free(struct tf *tfp __rte_unused,
}
int
-tf_tbl_alloc_search(struct tf *tfp __rte_unused,
- struct tf_tbl_alloc_search_parms *parms __rte_unused)
+tf_tbl_alloc_search(struct tf *tfp,
+ struct tf_tbl_alloc_search_parms *parms)
{
- return 0;
+ int rc, frc;
+ uint32_t idx;
+ struct tf_session *tfs;
+ struct tf_dev_info *dev;
+ struct tf_tbl_alloc_parms aparms;
+ struct tf_shadow_tbl_search_parms sparms;
+ struct tf_shadow_tbl_bind_index_parms bparms;
+ struct tf_tbl_free_parms fparms;
+
+ TF_CHECK_PARMS2(tfp, parms);
+
+ if (!shadow_init || !shadow_tbl_db[parms->dir]) {
+ TFP_DRV_LOG(ERR, "%s: Shadow TBL not initialized.\n",
+ tf_dir_2_str(parms->dir));
+ return -EINVAL;
+ }
+
+ memset(&sparms, 0, sizeof(sparms));
+ sparms.sparms = parms;
+ sparms.shadow_db = shadow_tbl_db[parms->dir];
+ rc = tf_shadow_tbl_search(&sparms);
+ if (rc)
+ return rc;
+
+ /*
+ * Return now if the caller did not request allocation or the search
+ * did not MISS; the hit status was already updated in the caller's
+ * search parms.
+ */
+ if (!parms->alloc || parms->search_status != MISS)
+ return rc;
+
+ /* Retrieve the session information */
+ rc = tf_session_get_session(tfp, &tfs);
+ if (rc) {
+ TFP_DRV_LOG(ERR,
+ "%s: Failed to lookup session, rc:%s\n",
+ tf_dir_2_str(parms->dir),
+ strerror(-rc));
+ return rc;
+ }
+
+ /* Retrieve the device information */
+ rc = tf_session_get_device(tfs, &dev);
+ if (rc) {
+ TFP_DRV_LOG(ERR,
+ "%s: Failed to lookup device, rc:%s\n",
+ tf_dir_2_str(parms->dir),
+ strerror(-rc));
+ return rc;
+ }
+
+ /* Allocate the index */
+ if (dev->ops->tf_dev_alloc_tbl == NULL) {
+ rc = -EOPNOTSUPP;
+ TFP_DRV_LOG(ERR,
+ "%s: Operation not supported, rc:%s\n",
+ tf_dir_2_str(parms->dir),
+ strerror(-rc));
+ return -EOPNOTSUPP;
+ }
+
+ memset(&aparms, 0, sizeof(aparms));
+ aparms.dir = parms->dir;
+ aparms.type = parms->type;
+ aparms.tbl_scope_id = parms->tbl_scope_id;
+ aparms.idx = &idx;
+ rc = dev->ops->tf_dev_alloc_tbl(tfp, &aparms);
+ if (rc) {
+ TFP_DRV_LOG(ERR,
+ "%s: Table allocation failed, rc:%s\n",
+ tf_dir_2_str(parms->dir),
+ strerror(-rc));
+ return rc;
+ }
+
+ /* Bind the allocated index to the data */
+ memset(&bparms, 0, sizeof(bparms));
+ bparms.shadow_db = shadow_tbl_db[parms->dir];
+ bparms.dir = parms->dir;
+ bparms.type = parms->type;
+ bparms.idx = idx;
+ bparms.data = parms->result;
+ bparms.data_sz_in_bytes = parms->result_sz_in_bytes;
+ bparms.hb_handle = sparms.hb_handle;
+ rc = tf_shadow_tbl_bind_index(&bparms);
+ if (rc) {
+ /* Error binding entry, need to free the allocated idx */
+ if (dev->ops->tf_dev_free_tbl == NULL) {
+ rc = -EOPNOTSUPP;
+ TFP_DRV_LOG(ERR,
+ "%s: Operation not supported, rc:%s\n",
+ tf_dir_2_str(parms->dir),
+ strerror(-rc));
+ return rc;
+ }
+
+ memset(&fparms, 0, sizeof(fparms));
+ fparms.dir = parms->dir;
+ fparms.type = parms->type;
+ fparms.idx = idx;
+ frc = dev->ops->tf_dev_free_tbl(tfp, &fparms);
+ if (frc) {
+ TFP_DRV_LOG(ERR,
+ "%s: Failed to free index allocated during "
+ "search. rc=%s\n",
+ tf_dir_2_str(parms->dir),
+ strerror(-frc));
+ /* return the original failure. */
+ return rc;
+ }
+ }
+
+ parms->idx = idx;
+
+ return rc;
}
int
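The error path in tf_tbl_alloc_search above (bind fails after the index was already allocated) uses an unwind pattern worth calling out: free the just-allocated index, but propagate the original bind error, never the status of the cleanup. A minimal sketch, with hypothetical `fake_*` helpers standing in for the device ops:

```c
#include <assert.h>
#include <stdint.h>

enum { FAKE_OK = 0, FAKE_EBIND = -22 };

/* Counters let a test observe that the unwind really ran. */
int fake_free_calls;

int fake_alloc(uint32_t *idx) { *idx = 7; return FAKE_OK; }
int fake_free(uint32_t idx) { (void)idx; fake_free_calls++; return FAKE_OK; }
int fake_bind(uint32_t idx, int fail)
{
	(void)idx;
	return fail ? FAKE_EBIND : FAKE_OK;
}

/*
 * Allocate an index, then bind data to it; on bind failure return
 * the freshly allocated index to the pool and report the ORIGINAL
 * bind error, ignoring the result of the free.
 */
int
alloc_then_bind(int bind_fails, uint32_t *idx)
{
	int rc = fake_alloc(idx);

	if (rc)
		return rc;
	rc = fake_bind(*idx, bind_fails);
	if (rc)
		(void)fake_free(*idx); /* unwind; keep original rc */
	return rc;
}
```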
diff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h
index f20e8d729..930fcc324 100644
--- a/drivers/net/bnxt/tf_core/tf_tbl.h
+++ b/drivers/net/bnxt/tf_core/tf_tbl.h
@@ -144,29 +144,31 @@ struct tf_tbl_alloc_search_parms {
*/
uint32_t tbl_scope_id;
/**
- * [in] Enable search for matching entry. If the table type is
- * internal the shadow copy will be searched before
- * alloc. Session must be configured with shadow copy enabled.
- */
- uint8_t search_enable;
- /**
- * [in] Result data to search for (if search_enable)
+ * [in] Result data to search for
*/
uint8_t *result;
/**
- * [in] Result data size in bytes (if search_enable)
+ * [in] Result data size in bytes
*/
uint16_t result_sz_in_bytes;
+ /**
+ * [in] Set to 1 to allocate the entry on a search MISS.
+ */
+ uint8_t alloc;
/**
* [out] If search_enable, set if matching entry found
*/
uint8_t hit;
/**
- * [out] Current ref count after allocation (if search_enable)
+ * [out] The status of the search (REJECT, MISS, HIT)
+ */
+ enum tf_search_status search_status;
+ /**
+ * [out] Current ref count after allocation
*/
uint16_t ref_cnt;
/**
- * [out] Idx of allocated entry or found entry (if search_enable)
+ * [out] Idx of allocated entry or found entry
*/
uint32_t idx;
};
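The new search_status out-field gives callers a three-way result. A hedged sketch of the caller-side dispatch follows; the enum and helper below are illustrative stand-ins (the real `enum tf_search_status` is defined elsewhere in tf_core), showing how HIT, MISS, and REJECT combine with the `alloc` flag:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the driver's three-way search result. */
enum sketch_status { SKETCH_REJECT, SKETCH_MISS, SKETCH_HIT };

/*
 * Returns 1 when the caller must program a fresh entry, 0 when an
 * existing entry is reused (its ref count was already bumped by the
 * search), and -1 when nothing can be done: the search was rejected
 * (e.g. no room), or it missed and allocation was not requested.
 */
int
sketch_must_program(enum sketch_status st, uint8_t alloc_requested)
{
	switch (st) {
	case SKETCH_HIT:
		return 0;
	case SKETCH_MISS:
		return alloc_requested ? 1 : -1;
	case SKETCH_REJECT:
	default:
		return -1;
	}
}
```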
diff --git a/drivers/net/bnxt/tf_core/tf_tcam.h b/drivers/net/bnxt/tf_core/tf_tcam.h
index 563b08c23..280f138dd 100644
--- a/drivers/net/bnxt/tf_core/tf_tcam.h
+++ b/drivers/net/bnxt/tf_core/tf_tcam.h
@@ -134,7 +134,7 @@ struct tf_tcam_alloc_search_parms {
/**
* [out] Search result status (hit, miss, reject)
*/
- enum tf_tcam_search_status search_status;
+ enum tf_search_status search_status;
/**
* [out] Current refcnt after allocation
*/
--
2.21.1 (Apple Git-122.3)