From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id B7831A0563; Wed, 15 Apr 2020 10:20:39 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 173151D556; Wed, 15 Apr 2020 10:19:42 +0200 (CEST) Received: from mail-pf1-f195.google.com (mail-pf1-f195.google.com [209.85.210.195]) by dpdk.org (Postfix) with ESMTP id 190391D54F for ; Wed, 15 Apr 2020 10:19:38 +0200 (CEST) Received: by mail-pf1-f195.google.com with SMTP id y25so1195378pfn.5 for ; Wed, 15 Apr 2020 01:19:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references; bh=TxLrcziIUZ1baVSl4xiYntVALWn5Pa52A50uYp7mOi8=; b=e7+yRND/Pe1w+OG4bURi2k2skNAhH3bCqI96k8UPUVhAbHf6v11LjJ4eR6z1zAP3A8 nx5jMU4SDHA79/47atxvyJodzhvVx8TXXZIOJoo9Nlf68r/WDdPyuU/a1DDedU7SU1xF UKQELA6W8nawExDsFwEsZZg4ynfgka3/0XcuM= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references; bh=TxLrcziIUZ1baVSl4xiYntVALWn5Pa52A50uYp7mOi8=; b=C6wlZoa0pK7//GP8qaTUtGGPaj3aBC8uCsQ2IC7qnLnUDOnSyE3UTGldL+9R8+oq4b vpXElEqli+PIevEy0gHXlfP8aCdJI68VO8Uu6qgzVQlPBcV0D3so3RLQYG17RrDtiHF8 1e2hFk5T/pcyOxo/COTgwex2tCPMTHdRIcMCYCsrr5WdfqDc4/cA9T4r1Fd3fub1e8G5 i8LELXPoV5oqDrKEtzcdRNZJ1xUG/hlueAxbqUFqLhthJOIRcyKlpj7ICE0HcHf0PHT3 wmeTjhxBQ+/X01mXR8hwIQEGEYQVo/ZY4V2+a7VzMv86U2HrtnnpEkpLV9wi8egdXcH+ 3BxA== X-Gm-Message-State: AGi0PuZvNOtiVCiKSoFyDdePhy2ioP+pCbukC7ziC+0Mbepg13fox93z pfH4jZ5bvfN+llH4Lj/fXuxi7pikIInVn/+j1GPldq6Rzj1glBse269VwszWBQ15uMDScYs7ioU vPwWYC4dyO0tkkrRL5k/WUFo4WtOAFw75i/vkKn0u5wR94i6bKv39rrdg1N2Lt4Iv7Rh5 X-Google-Smtp-Source: APiQypIBcQSLEl1g7BIlgkrEYedsEAXKGy7RADlsZuZ4V6Wm0V0XAAzGt0WZsorQYiZr7WJ57w7eJA== X-Received: by 2002:a63:ee4e:: with SMTP id n14mr864742pgk.442.1586938776111; Wed, 15 Apr 2020 01:19:36 -0700 (PDT) Received: from S60.dhcp.broadcom.net ([192.19.234.250]) by smtp.gmail.com with ESMTPSA id fy21sm3819019pjb.25.2020.04.15.01.19.33 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128); Wed, 15 Apr 2020 01:19:35 -0700 (PDT) From: Venkat Duvvuru To: dev@dpdk.org Cc: Michael Wildt Date: Wed, 15 Apr 2020 13:48:42 +0530 Message-Id: <1586938751-32808-6-git-send-email-venkatkumar.duvvuru@broadcom.com> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1586938751-32808-1-git-send-email-venkatkumar.duvvuru@broadcom.com> References: <1586852011-37536-1-git-send-email-venkatkumar.duvvuru@broadcom.com> <1586938751-32808-1-git-send-email-venkatkumar.duvvuru@broadcom.com> Subject: [dpdk-dev] [PATCH v4 05/34] net/bnxt: add initial tf core session close support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Michael Wildt - Add TruFlow session and resource support functions - Add Truflow session close API and related message support functions for both session and hw resources Signed-off-by: Michael Wildt Reviewed-by: Randy Schacher Reviewed-by: Ajit Kumar Khaparde --- drivers/net/bnxt/Makefile | 1 + drivers/net/bnxt/tf_core/bitalloc.c | 364 +++++++++++++++++++++++++++++ drivers/net/bnxt/tf_core/bitalloc.h | 119 ++++++++++ drivers/net/bnxt/tf_core/tf_core.c | 86 +++++++ drivers/net/bnxt/tf_core/tf_msg.c | 401 ++++++++++++++++++++++++++++++++ 
drivers/net/bnxt/tf_core/tf_msg.h | 42 ++++ drivers/net/bnxt/tf_core/tf_resources.h | 24 +- drivers/net/bnxt/tf_core/tf_rm.h | 113 +++++++++ drivers/net/bnxt/tf_core/tf_session.h | 1 + 9 files changed, 1146 insertions(+), 5 deletions(-) create mode 100644 drivers/net/bnxt/tf_core/bitalloc.c create mode 100644 drivers/net/bnxt/tf_core/bitalloc.h diff --git a/drivers/net/bnxt/Makefile b/drivers/net/bnxt/Makefile index 8a68059..8474673 100644 --- a/drivers/net/bnxt/Makefile +++ b/drivers/net/bnxt/Makefile @@ -48,6 +48,7 @@ CFLAGS += -I$(SRCDIR) -I$(SRCDIR)/tf_core endif SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_core.c +SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/bitalloc.c SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_msg.c SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tfp.c diff --git a/drivers/net/bnxt/tf_core/bitalloc.c b/drivers/net/bnxt/tf_core/bitalloc.c new file mode 100644 index 0000000..fb4df9a --- /dev/null +++ b/drivers/net/bnxt/tf_core/bitalloc.c @@ -0,0 +1,364 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2019-2020 Broadcom + * All rights reserved. + */ + +#include "bitalloc.h" + +#define BITALLOC_MAX_LEVELS 6 + +/* Finds the first bit set plus 1, equivalent to gcc __builtin_ffs */ +static int +ba_ffs(bitalloc_word_t v) +{ + int c; /* c will be the number of zero bits on the right plus 1 */ + + v &= -v; + c = v ? 32 : 0; + + if (v & 0x0000FFFF) + c -= 16; + if (v & 0x00FF00FF) + c -= 8; + if (v & 0x0F0F0F0F) + c -= 4; + if (v & 0x33333333) + c -= 2; + if (v & 0x55555555) + c -= 1; + + return c; +} + +int +ba_init(struct bitalloc *pool, int size) +{ + bitalloc_word_t *mem = (bitalloc_word_t *)pool; + int i; + + /* Initialize */ + pool->size = 0; + + if (size < 1 || size > BITALLOC_MAX_SIZE) + return -1; + + /* Zero structure */ + for (i = 0; + i < (int)(BITALLOC_SIZEOF(size) / sizeof(bitalloc_word_t)); + i++) + mem[i] = 0; + + /* Initialize */ + pool->size = size; + + /* Embed number of words of next level, after each level */ + int words[BITALLOC_MAX_LEVELS]; + int lev = 0; + int offset = 0; + + words[0] = (size + 31) / 32; + while (words[lev] > 1) { + lev++; + words[lev] = (words[lev - 1] + 31) / 32; + } + + while (lev) { + offset += words[lev]; + pool->storage[offset++] = words[--lev]; + } + + /* Free the entire pool */ + for (i = 0; i < size; i++) + ba_free(pool, i); + + return 0; +} + +static int +ba_alloc_helper(struct bitalloc *pool, + int offset, + int words, + unsigned int size, + int index, + int *clear) +{ + bitalloc_word_t *storage = &pool->storage[offset]; + int loc = ba_ffs(storage[index]); + int r; + + if (loc == 0) + return -1; + + loc--; + + if (pool->size > size) { + r = ba_alloc_helper(pool, + offset + words + 1, + storage[words], + size * 32, + index * 32 + loc, + clear); + } else { + r = index * 32 + loc; + *clear = 1; + pool->free_count--; + } + + if (*clear) { + storage[index] &= ~(1 << loc); + *clear = (storage[index] == 0); + } + + return r; +} + +int +ba_alloc(struct bitalloc *pool) +{ + int clear = 0; + + return ba_alloc_helper(pool, 0, 1, 32, 0, &clear); +} + +static int +ba_alloc_index_helper(struct bitalloc *pool, + int offset, + int words, + unsigned int size, + int *index, + int *clear) +{ + bitalloc_word_t *storage = &pool->storage[offset]; + int loc; + int r; + + if (pool->size > size) + r = ba_alloc_index_helper(pool, + offset + words + 1, + storage[words], + size * 32, + index, + clear); + else + r = 1; /* Check if already allocated */ + + loc = (*index % 32); + *index = *index / 32; + + if (r == 1) { + r = 
(storage[*index] & (1 << loc)) ? 0 : -1; + if (r == 0) { + *clear = 1; + pool->free_count--; + } + } + + if (*clear) { + storage[*index] &= ~(1 << loc); + *clear = (storage[*index] == 0); + } + + return r; +} + +int +ba_alloc_index(struct bitalloc *pool, int index) +{ + int clear = 0; + int index_copy = index; + + if (index < 0 || index >= (int)pool->size) + return -1; + + if (ba_alloc_index_helper(pool, 0, 1, 32, &index_copy, &clear) >= 0) + return index; + else + return -1; +} + +static int +ba_inuse_helper(struct bitalloc *pool, + int offset, + int words, + unsigned int size, + int *index) +{ + bitalloc_word_t *storage = &pool->storage[offset]; + int loc; + int r; + + if (pool->size > size) + r = ba_inuse_helper(pool, + offset + words + 1, + storage[words], + size * 32, + index); + else + r = 1; /* Check if in use */ + + loc = (*index % 32); + *index = *index / 32; + + if (r == 1) + r = (storage[*index] & (1 << loc)) ? -1 : 0; + + return r; +} + +int +ba_inuse(struct bitalloc *pool, int index) +{ + if (index < 0 || index >= (int)pool->size) + return -1; + + return ba_inuse_helper(pool, 0, 1, 32, &index) == 0; +} + +static int +ba_free_helper(struct bitalloc *pool, + int offset, + int words, + unsigned int size, + int *index) +{ + bitalloc_word_t *storage = &pool->storage[offset]; + int loc; + int r; + + if (pool->size > size) + r = ba_free_helper(pool, + offset + words + 1, + storage[words], + size * 32, + index); + else + r = 1; /* Check if already free */ + + loc = (*index % 32); + *index = *index / 32; + + if (r == 1) { + r = (storage[*index] & (1 << loc)) ? -1 : 0; + if (r == 0) + pool->free_count++; + } + + if (r == 0) + storage[*index] |= (1 << loc); + + return r; +} + +int +ba_free(struct bitalloc *pool, int index) +{ + if (index < 0 || index >= (int)pool->size) + return -1; + + return ba_free_helper(pool, 0, 1, 32, &index); +} + +int +ba_inuse_free(struct bitalloc *pool, int index) +{ + if (index < 0 || index >= (int)pool->size) + return -1; + + return ba_free_helper(pool, 0, 1, 32, &index) + 1; +} + +int +ba_free_count(struct bitalloc *pool) +{ + return (int)pool->free_count; +} + +int ba_inuse_count(struct bitalloc *pool) +{ + return (int)(pool->size) - (int)(pool->free_count); +} + +static int +ba_find_next_helper(struct bitalloc *pool, + int offset, + int words, + unsigned int size, + int *index, + int free) +{ + bitalloc_word_t *storage = &pool->storage[offset]; + int loc, r, bottom = 0; + + if (pool->size > size) + r = ba_find_next_helper(pool, + offset + words + 1, + storage[words], + size * 32, + index, + free); + else + bottom = 1; /* Bottom of tree */ + + loc = (*index % 32); + *index = *index / 32; + + if (bottom) { + int bit_index = *index * 32; + + loc = ba_ffs(~storage[*index] & ((bitalloc_word_t)-1 << loc)); + if (loc > 0) { + loc--; + r = (bit_index + loc); + if (r >= (int)pool->size) + r = -1; + } else { + /* Loop over array at bottom of tree */ + r = -1; + bit_index += 32; + *index = *index + 1; + while ((int)pool->size > bit_index) { + loc = ba_ffs(~storage[*index]); + + if (loc > 0) { + loc--; + r = (bit_index + loc); + if (r >= (int)pool->size) + r = -1; + break; + } + bit_index += 32; + *index = *index + 1; + } + } + } + + if (r >= 0 && (free)) { + if (bottom) + pool->free_count++; + storage[*index] |= (1 << loc); + } + + return r; +} + +int +ba_find_next_inuse(struct bitalloc *pool, int index) +{ + if (index < 0 || + index >= (int)pool->size || + pool->free_count == pool->size) + return -1; + + return ba_find_next_helper(pool, 0, 1, 32, &index, 0); +} + 
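+/* Bitmap convention used by this allocator: a set bit marks a free
+ * index (or, at the upper levels of the tree, a word that still
+ * contains free indices); a cleared bit marks an index in use.
+ * ba_find_next_inuse_free() below therefore scans ~storage for the
+ * next in-use index at or after 'index' and, unlike
+ * ba_find_next_inuse(), passes free = 1 to ba_find_next_helper() so
+ * the entry it finds is marked free again and the pool's free_count
+ * is incremented.
+ */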
+int
+ba_find_next_inuse_free(struct bitalloc *pool, int index)
+{
+	if (index < 0 ||
+	    index >= (int)pool->size ||
+	    pool->free_count == pool->size)
+		return -1;
+
+	return ba_find_next_helper(pool, 0, 1, 32, &index, 1);
+}
diff --git a/drivers/net/bnxt/tf_core/bitalloc.h b/drivers/net/bnxt/tf_core/bitalloc.h
new file mode 100644
index 0000000..563c853
--- /dev/null
+++ b/drivers/net/bnxt/tf_core/bitalloc.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019-2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BITALLOC_H_
+#define _BITALLOC_H_
+
+#include <stdint.h>
+
+/* Bitalloc works on uint32_t as its word size */
+typedef uint32_t bitalloc_word_t;
+
+struct bitalloc {
+	bitalloc_word_t size;
+	bitalloc_word_t free_count;
+	bitalloc_word_t storage[1];
+};
+
+#define BA_L0(s) (((s) + 31) / 32)
+#define BA_L1(s) ((BA_L0(s) + 31) / 32)
+#define BA_L2(s) ((BA_L1(s) + 31) / 32)
+#define BA_L3(s) ((BA_L2(s) + 31) / 32)
+#define BA_L4(s) ((BA_L3(s) + 31) / 32)
+
+#define BITALLOC_SIZEOF(size) \
+	(sizeof(struct bitalloc) * \
+	 (((sizeof(struct bitalloc) + \
+	    sizeof(struct bitalloc) - 1 + \
+	    (sizeof(bitalloc_word_t) * \
+	     ((BA_L0(size) - 1) + \
+	      ((BA_L0(size) == 1) ? 0 : (BA_L1(size) + 1)) + \
+	      ((BA_L1(size) == 1) ? 0 : (BA_L2(size) + 1)) + \
+	      ((BA_L2(size) == 1) ? 0 : (BA_L3(size) + 1)) + \
+	      ((BA_L3(size) == 1) ? 0 : (BA_L4(size) + 1)))))) / \
+	  sizeof(struct bitalloc)))
+
+#define BITALLOC_MAX_SIZE (32 * 32 * 32 * 32 * 32 * 32)
+
+/* The instantiation of a bitalloc looks a bit odd. Since a
+ * bit allocator has variable storage, we need a way to get
+ * a pointer to a bitalloc structure that points to the correct
+ * amount of storage. We do this by creating an array of
+ * bitalloc where the first element in the array is the
+ * actual bitalloc base structure, and the remaining elements
+ * in the array provide the storage for it. This approach allows
+ * instances to be individual variables or members of larger
+ * structures.
+ */
+#define BITALLOC_INST(name, size) \
+	struct bitalloc name[(BITALLOC_SIZEOF(size) / \
+			      sizeof(struct bitalloc))]
+
+/* Symbolic return codes */
+#define BA_SUCCESS           0
+#define BA_FAIL             -1
+#define BA_ENTRY_FREE        0
+#define BA_ENTRY_IN_USE      1
+#define BA_NO_ENTRY_FOUND   -1
+
+/**
+ * Initializes the bit allocator
+ *
+ * Returns 0 on success, -1 on failure. Size is arbitrary up to
+ * BITALLOC_MAX_SIZE
+ */
+int ba_init(struct bitalloc *pool, int size);
+
+/**
+ * Returns -1 on failure, or index of allocated entry
+ */
+int ba_alloc(struct bitalloc *pool);
+int ba_alloc_index(struct bitalloc *pool, int index);
+
+/**
+ * Query a particular index in a pool to check if it is in use.
+ *
+ * Returns -1 on invalid index, 1 if the index is allocated, 0 if it
+ * is free
+ */
+int ba_inuse(struct bitalloc *pool, int index);
+
+/**
+ * Variant of ba_inuse that frees the index if it is allocated, same
+ * return codes as ba_inuse
+ */
+int ba_inuse_free(struct bitalloc *pool, int index);
+
+/**
+ * Find next index that is in use, start checking at index 'idx'
+ *
+ * Returns next index that is in use on success, or
+ * -1 if no in use index is found
+ */
+int ba_find_next_inuse(struct bitalloc *pool, int idx);
+
+/**
+ * Variant of ba_find_next_inuse that also frees the next in use index,
+ * same return codes as ba_find_next_inuse
+ */
+int ba_find_next_inuse_free(struct bitalloc *pool, int idx);
+
+/**
+ * Multiple freeing of the same index has no negative side effects,
+ * but will return -1. Returns -1 on failure, 0 on success.
+ */ +int ba_free(struct bitalloc *pool, int index); + +/** + * Returns the pool's free count + */ +int ba_free_count(struct bitalloc *pool); + +/** + * Returns the pool's in use count + */ +int ba_inuse_count(struct bitalloc *pool); + +#endif /* _BITALLOC_H_ */ diff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c index 6bafae5..3c5d55d 100644 --- a/drivers/net/bnxt/tf_core/tf_core.c +++ b/drivers/net/bnxt/tf_core/tf_core.c @@ -7,10 +7,18 @@ #include "tf_core.h" #include "tf_session.h" +#include "tf_rm.h" #include "tf_msg.h" #include "tfp.h" +#include "bitalloc.h" #include "bnxt.h" +static inline uint32_t SWAP_WORDS32(uint32_t val32) +{ + return (((val32 & 0x0000ffff) << 16) | + ((val32 & 0xffff0000) >> 16)); +} + int tf_open_session(struct tf *tfp, struct tf_open_session_parms *parms) @@ -141,5 +149,83 @@ tf_open_session(struct tf *tfp, return rc; cleanup_close: + tf_close_session(tfp); return -EINVAL; } + +int +tf_attach_session(struct tf *tfp __rte_unused, + struct tf_attach_session_parms *parms __rte_unused) +{ +#if (TF_SHARED == 1) + int rc; + + if (tfp == NULL) + return -EINVAL; + + /* - Open the shared memory for the attach_chan_name + * - Point to the shared session for this Device instance + * - Check that session is valid + * - Attach to the firmware so it can record there is more + * than one client of the session. + */ + + if (tfp->session) { + if (tfp->session->session_id.id != TF_SESSION_ID_INVALID) { + rc = tf_msg_session_attach(tfp, + parms->ctrl_chan_name, + parms->session_id); + } + } +#endif /* TF_SHARED */ + return -1; +} + +int +tf_close_session(struct tf *tfp) +{ + int rc; + int rc_close = 0; + struct tf_session *tfs; + union tf_session_id session_id; + + if (tfp == NULL || tfp->session == NULL) + return -EINVAL; + + tfs = (struct tf_session *)(tfp->session->core_data); + + if (tfs->session_id.id != TF_SESSION_ID_INVALID) { + rc = tf_msg_session_close(tfp); + if (rc) { + /* Log error */ + PMD_DRV_LOG(ERR, + "Message send failed, rc:%d\n", + rc); + } + + /* Update the ref_count */ + tfs->ref_count--; + } + + session_id = tfs->session_id; + + /* Final cleanup as we're last user of the session */ + if (tfs->ref_count == 0) { + tfp_free(tfp->session->core_data); + tfp_free(tfp->session); + tfp->session = NULL; + } + + PMD_DRV_LOG(INFO, + "Session closed, session_id:%d\n", + session_id.id); + + PMD_DRV_LOG(INFO, + "domain:%d, bus:%d, device:%d, fw_session_id:%d\n", + session_id.internal.domain, + session_id.internal.bus, + session_id.internal.device, + session_id.internal.fw_session_id); + + return rc_close; +} diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c index 2b68681..e05aea7 100644 --- a/drivers/net/bnxt/tf_core/tf_msg.c +++ b/drivers/net/bnxt/tf_core/tf_msg.c @@ -18,6 +18,82 @@ #include "hwrm_tf.h" /** + * Endian converts min and max values from the HW response to the query + */ +#define TF_HW_RESP_TO_QUERY(query, index, response, element) do { \ + (query)->hw_query[index].min = \ + tfp_le_to_cpu_16(response. element ## _min); \ + (query)->hw_query[index].max = \ + tfp_le_to_cpu_16(response. element ## _max); \ +} while (0) + +/** + * Endian converts the number of entries from the alloc to the request + */ +#define TF_HW_ALLOC_TO_REQ(alloc, index, request, element) \ + (request. 
num_ ## element = tfp_cpu_to_le_16((alloc)->hw_num[index])) + +/** + * Endian converts the start and stride value from the free to the request + */ +#define TF_HW_FREE_TO_REQ(hw_entry, index, request, element) do { \ + request.element ## _start = \ + tfp_cpu_to_le_16(hw_entry[index].start); \ + request.element ## _stride = \ + tfp_cpu_to_le_16(hw_entry[index].stride); \ +} while (0) + +/** + * Endian converts the start and stride from the HW response to the + * alloc + */ +#define TF_HW_RESP_TO_ALLOC(hw_entry, index, response, element) do { \ + hw_entry[index].start = \ + tfp_le_to_cpu_16(response.element ## _start); \ + hw_entry[index].stride = \ + tfp_le_to_cpu_16(response.element ## _stride); \ +} while (0) + +/** + * Endian converts min and max values from the SRAM response to the + * query + */ +#define TF_SRAM_RESP_TO_QUERY(query, index, response, element) do { \ + (query)->sram_query[index].min = \ + tfp_le_to_cpu_16(response.element ## _min); \ + (query)->sram_query[index].max = \ + tfp_le_to_cpu_16(response.element ## _max); \ +} while (0) + +/** + * Endian converts the number of entries from the action (alloc) to + * the request + */ +#define TF_SRAM_ALLOC_TO_REQ(action, index, request, element) \ + (request. num_ ## element = tfp_cpu_to_le_16((action)->sram_num[index])) + +/** + * Endian converts the start and stride value from the free to the request + */ +#define TF_SRAM_FREE_TO_REQ(sram_entry, index, request, element) do { \ + request.element ## _start = \ + tfp_cpu_to_le_16(sram_entry[index].start); \ + request.element ## _stride = \ + tfp_cpu_to_le_16(sram_entry[index].stride); \ +} while (0) + +/** + * Endian converts the start and stride from the HW response to the + * alloc + */ +#define TF_SRAM_RESP_TO_ALLOC(sram_entry, index, response, element) do { \ + sram_entry[index].start = \ + tfp_le_to_cpu_16(response.element ## _start); \ + sram_entry[index].stride = \ + tfp_le_to_cpu_16(response.element ## _stride); \ +} while (0) + +/** * Sends session open request to TF Firmware */ int @@ -51,6 +127,45 @@ tf_msg_session_open(struct tf *tfp, } /** + * Sends session attach request to TF Firmware + */ +int +tf_msg_session_attach(struct tf *tfp __rte_unused, + char *ctrl_chan_name __rte_unused, + uint8_t tf_fw_session_id __rte_unused) +{ + return -1; +} + +/** + * Sends session close request to TF Firmware + */ +int +tf_msg_session_close(struct tf *tfp) +{ + int rc; + struct hwrm_tf_session_close_input req = { 0 }; + struct hwrm_tf_session_close_output resp = { 0 }; + struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data); + struct tfp_send_msg_parms parms = { 0 }; + + /* Populate the request */ + req.fw_session_id = + tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id); + + parms.tf_type = HWRM_TF_SESSION_CLOSE; + parms.req_data = (uint32_t *)&req; + parms.req_size = sizeof(req); + parms.resp_data = (uint32_t *)&resp; + parms.resp_size = sizeof(resp); + parms.mailbox = TF_KONG_MB; + + rc = tfp_send_msg_direct(tfp, + &parms); + return rc; +} + +/** * Sends session query config request to TF Firmware */ int @@ -77,3 +192,289 @@ tf_msg_session_qcfg(struct tf *tfp) &parms); return rc; } + +/** + * Sends session HW resource query capability request to TF Firmware + */ +int +tf_msg_session_hw_resc_qcaps(struct tf *tfp, + enum tf_dir dir, + struct tf_rm_hw_query *query) +{ + int rc; + struct tfp_send_msg_parms parms = { 0 }; + struct tf_session_hw_resc_qcaps_input req = { 0 }; + struct tf_session_hw_resc_qcaps_output resp = { 0 }; + struct tf_session *tfs = 
(struct tf_session *)(tfp->session->core_data); + + memset(query, 0, sizeof(*query)); + + /* Populate the request */ + req.fw_session_id = + tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id); + req.flags = tfp_cpu_to_le_16(dir); + + MSG_PREP(parms, + TF_KONG_MB, + HWRM_TF, + HWRM_TFT_SESSION_HW_RESC_QCAPS, + req, + resp); + + rc = tfp_send_msg_tunneled(tfp, &parms); + if (rc) + return rc; + + /* Process the response */ + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp, + l2_ctx_tcam_entries); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_FUNC, resp, + prof_func); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_PROF_TCAM, resp, + prof_tcam_entries); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_PROF_ID, resp, + em_prof_id); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EM_REC, resp, + em_record_entries); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp, + wc_tcam_prof_id); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_WC_TCAM, resp, + wc_tcam_entries); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_PROF, resp, + meter_profiles); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METER_INST, + resp, meter_inst); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_MIRROR, resp, + mirrors); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_UPAR, resp, + upar); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_SP_TCAM, resp, + sp_tcam_entries); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_L2_FUNC, resp, + l2_func); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_FKB, resp, + flex_key_templ); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_TBL_SCOPE, resp, + tbl_scope); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH0, resp, + epoch0_entries); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_EPOCH1, resp, + epoch1_entries); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_METADATA, resp, + metadata); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_CT_STATE, resp, + ct_state); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_PROF, resp, + range_prof); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_RANGE_ENTRY, resp, + range_entries); + TF_HW_RESP_TO_QUERY(query, TF_RESC_TYPE_HW_LAG_ENTRY, resp, + lag_tbl_entries); + + return tfp_le_to_cpu_32(parms.tf_resp_code); +} + +/** + * Sends session HW resource allocation request to TF Firmware + */ +int +tf_msg_session_hw_resc_alloc(struct tf *tfp __rte_unused, + enum tf_dir dir, + struct tf_rm_hw_alloc *hw_alloc __rte_unused, + struct tf_rm_entry *hw_entry __rte_unused) +{ + int rc; + struct tfp_send_msg_parms parms = { 0 }; + struct tf_session_hw_resc_alloc_input req = { 0 }; + struct tf_session_hw_resc_alloc_output resp = { 0 }; + struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data); + + memset(hw_entry, 0, sizeof(*hw_entry)); + + /* Populate the request */ + req.fw_session_id = + tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id); + req.flags = tfp_cpu_to_le_16(dir); + + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req, + l2_ctx_tcam_entries); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_FUNC, req, + prof_func_entries); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_PROF_TCAM, req, + prof_tcam_entries); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_PROF_ID, req, + em_prof_id); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EM_REC, req, + em_record_entries); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req, + wc_tcam_prof_id); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_WC_TCAM, req, + wc_tcam_entries); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_PROF, req, + 
meter_profiles); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METER_INST, req, + meter_inst); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_MIRROR, req, + mirrors); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_UPAR, req, + upar); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_SP_TCAM, req, + sp_tcam_entries); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_L2_FUNC, req, + l2_func); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_FKB, req, + flex_key_templ); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_TBL_SCOPE, req, + tbl_scope); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH0, req, + epoch0_entries); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_EPOCH1, req, + epoch1_entries); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_METADATA, req, + metadata); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_CT_STATE, req, + ct_state); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_PROF, req, + range_prof); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_RANGE_ENTRY, req, + range_entries); + TF_HW_ALLOC_TO_REQ(hw_alloc, TF_RESC_TYPE_HW_LAG_ENTRY, req, + lag_tbl_entries); + + MSG_PREP(parms, + TF_KONG_MB, + HWRM_TF, + HWRM_TFT_SESSION_HW_RESC_ALLOC, + req, + resp); + + rc = tfp_send_msg_tunneled(tfp, &parms); + if (rc) + return rc; + + /* Process the response */ + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, resp, + l2_ctx_tcam_entries); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, resp, + prof_func); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, resp, + prof_tcam_entries); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, resp, + em_prof_id); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EM_REC, resp, + em_record_entries); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, resp, + wc_tcam_prof_id); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, resp, + wc_tcam_entries); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_PROF, resp, + meter_profiles); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METER_INST, resp, + meter_inst); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_MIRROR, resp, + mirrors); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_UPAR, resp, + upar); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, resp, + sp_tcam_entries); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, resp, + l2_func); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_FKB, resp, + flex_key_templ); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, resp, + tbl_scope); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH0, resp, + epoch0_entries); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_EPOCH1, resp, + epoch1_entries); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_METADATA, resp, + metadata); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_CT_STATE, resp, + ct_state); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, resp, + range_prof); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, resp, + range_entries); + TF_HW_RESP_TO_ALLOC(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, resp, + lag_tbl_entries); + + return tfp_le_to_cpu_32(parms.tf_resp_code); +} + +/** + * Sends session HW resource free request to TF Firmware + */ +int +tf_msg_session_hw_resc_free(struct tf *tfp, + enum tf_dir dir, + struct tf_rm_entry *hw_entry) +{ + int rc; + struct tfp_send_msg_parms parms = { 0 }; + struct tf_session_hw_resc_free_input req = { 0 }; + struct tf_session *tfs = (struct tf_session *)(tfp->session->core_data); + + memset(hw_entry, 0, sizeof(*hw_entry)); + + /* Populate the request 
*/ + req.fw_session_id = + tfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id); + req.flags = tfp_cpu_to_le_16(dir); + + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_CTXT_TCAM, req, + l2_ctx_tcam_entries); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_FUNC, req, + prof_func); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_PROF_TCAM, req, + prof_tcam_entries); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_PROF_ID, req, + em_prof_id); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EM_REC, req, + em_record_entries); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM_PROF_ID, req, + wc_tcam_prof_id); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_WC_TCAM, req, + wc_tcam_entries); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_PROF, req, + meter_profiles); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METER_INST, req, + meter_inst); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_MIRROR, req, + mirrors); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_UPAR, req, + upar); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_SP_TCAM, req, + sp_tcam_entries); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_L2_FUNC, req, + l2_func); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_FKB, req, + flex_key_templ); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_TBL_SCOPE, req, + tbl_scope); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH0, req, + epoch0_entries); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_EPOCH1, req, + epoch1_entries); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_METADATA, req, + metadata); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_CT_STATE, req, + ct_state); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_PROF, req, + range_prof); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_RANGE_ENTRY, req, + range_entries); + TF_HW_FREE_TO_REQ(hw_entry, TF_RESC_TYPE_HW_LAG_ENTRY, req, + lag_tbl_entries); + + MSG_PREP_NO_RESP(parms, + TF_KONG_MB, + HWRM_TF, + HWRM_TFT_SESSION_HW_RESC_FREE, + req); + + rc = tfp_send_msg_tunneled(tfp, &parms); + if (rc) + return rc; + + return tfp_le_to_cpu_32(parms.tf_resp_code); +} diff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h index 20ebf2e..da5ccf3 100644 --- a/drivers/net/bnxt/tf_core/tf_msg.h +++ b/drivers/net/bnxt/tf_core/tf_msg.h @@ -30,6 +30,34 @@ int tf_msg_session_open(struct tf *tfp, uint8_t *fw_session_id); /** + * Sends session close request to Firmware + * + * [in] session + * Pointer to session handle + * + * [in] fw_session_id + * Pointer to the fw_session_id that is assigned to the session at + * time of session open + * + * Returns: + * + */ +int tf_msg_session_attach(struct tf *tfp, + char *ctrl_channel_name, + uint8_t tf_fw_session_id); + +/** + * Sends session close request to Firmware + * + * [in] session + * Pointer to session handle + * + * Returns: + * + */ +int tf_msg_session_close(struct tf *tfp); + +/** * Sends session query config request to TF Firmware */ int tf_msg_session_qcfg(struct tf *tfp); @@ -41,4 +69,18 @@ int tf_msg_session_hw_resc_qcaps(struct tf *tfp, enum tf_dir dir, struct tf_rm_hw_query *hw_query); +/** + * Sends session HW resource allocation request to TF Firmware + */ +int tf_msg_session_hw_resc_alloc(struct tf *tfp, + enum tf_dir dir, + struct tf_rm_hw_alloc *hw_alloc, + struct tf_rm_entry *hw_entry); + +/** + * Sends session HW resource free request to TF Firmware + */ +int tf_msg_session_hw_resc_free(struct tf *tfp, + enum tf_dir dir, + struct tf_rm_entry *hw_entry); #endif /* _TF_MSG_H_ */ diff --git a/drivers/net/bnxt/tf_core/tf_resources.h 
b/drivers/net/bnxt/tf_core/tf_resources.h
index 160abac..8dbb2f9 100644
--- a/drivers/net/bnxt/tf_core/tf_resources.h
+++ b/drivers/net/bnxt/tf_core/tf_resources.h
@@ -6,11 +6,6 @@
 #ifndef _TF_RESOURCES_H_
 #define _TF_RESOURCES_H_
 
-/*
- * Hardware specific MAX values
- * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI
- */
-
 /** HW Resource types
  */
 enum tf_resource_type_hw {
@@ -43,4 +38,23 @@ enum tf_resource_type_hw {
 	TF_RESC_TYPE_HW_LAG_ENTRY,
 	TF_RESC_TYPE_HW_MAX
 };
+
+/** SRAM Resource types
+ */
+enum tf_resource_type_sram {
+	TF_RESC_TYPE_SRAM_FULL_ACTION,
+	TF_RESC_TYPE_SRAM_MCG,
+	TF_RESC_TYPE_SRAM_ENCAP_8B,
+	TF_RESC_TYPE_SRAM_ENCAP_16B,
+	TF_RESC_TYPE_SRAM_ENCAP_64B,
+	TF_RESC_TYPE_SRAM_SP_SMAC,
+	TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,
+	TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,
+	TF_RESC_TYPE_SRAM_COUNTER_64B,
+	TF_RESC_TYPE_SRAM_NAT_SPORT,
+	TF_RESC_TYPE_SRAM_NAT_DPORT,
+	TF_RESC_TYPE_SRAM_NAT_S_IPV4,
+	TF_RESC_TYPE_SRAM_NAT_D_IPV4,
+	TF_RESC_TYPE_SRAM_MAX
+};
 #endif /* _TF_RESOURCES_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h
index 5164d6b..57ce19b 100644
--- a/drivers/net/bnxt/tf_core/tf_rm.h
+++ b/drivers/net/bnxt/tf_core/tf_rm.h
@@ -8,10 +8,52 @@
 #include "tf_resources.h"
 #include "tf_core.h"
+#include "bitalloc.h"
 
 struct tf;
 
 struct tf_session;
 
+/* Internal macro used to select the appropriate allocation pool based
+ * on the DIRECTION parameter; it also validates DIRECTION. On
+ * successful return SESSION_POOL points to the pool for the requested
+ * direction (a GLOBAL_POOL is used to globally manage resource
+ * allocation, while a SESSION_POOL tracks the resources that have
+ * been allocated to the session).
+ *
+ * parameters:
+ *   struct tf_session  *tfs
+ *   enum tf_dir         direction
+ *   struct bitalloc   **session_pool
+ *   string              pool_name - used to form the names of the
+ *                       bit allocation pools; both directions of a
+ *                       session pool must share the same base name.
+ *                       For example, if pool_name is feat_pool the
+ *                       session pools referenced are feat_pool_RX
+ *                       and feat_pool_TX.
+ *
+ *   int                 rc - return code
+ *                       0 - Success
+ *                       -1 - invalid DIRECTION parameter
+ */
+#define TF_RM_GET_POOLS(tfs, direction, session_pool, pool_name, rc) do { \
+		(rc) = 0; \
+		if ((direction) == TF_DIR_RX) { \
+			*(session_pool) = (tfs)->pool_name ## _RX; \
+		} else if ((direction) == TF_DIR_TX) { \
+			*(session_pool) = (tfs)->pool_name ## _TX; \
+		} else { \
+			rc = -1; \
+		} \
+	} while (0)
+
+#define TF_RM_GET_POOLS_RX(tfs, session_pool, pool_name) \
+	(*(session_pool) = (tfs)->pool_name ## _RX)
+
+#define TF_RM_GET_POOLS_TX(tfs, session_pool, pool_name) \
+	(*(session_pool) = (tfs)->pool_name ## _TX)
+
 /**
  * Resource query single entry
  */
@@ -23,6 +65,16 @@ struct tf_rm_query_entry {
 };
 
 /**
+ * Resource single entry
+ */
+struct tf_rm_entry {
+	/** Starting index of the allocated resource */
+	uint16_t start;
+	/** Number of allocated elements */
+	uint16_t stride;
+};
+
+/**
  * Resource query array of HW entities
  */
 struct tf_rm_hw_query {
@@ -30,4 +82,65 @@ struct tf_rm_hw_query {
 	struct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];
 };
+/**
+ * Resource allocation array of HW entities
+ */
+struct tf_rm_hw_alloc {
+	/** array of HW resource entries */
+	uint16_t hw_num[TF_RESC_TYPE_HW_MAX];
+};
+
+/**
+ * Resource query array of SRAM entities
+ */
+struct tf_rm_sram_query {
+	/** array of SRAM resource entries */
+	struct tf_rm_query_entry sram_query[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Resource allocation array of SRAM entities
+ */
+struct tf_rm_sram_alloc {
+	/** array of SRAM resource entries */
+	uint16_t sram_num[TF_RESC_TYPE_SRAM_MAX];
+};
+
+/**
+ * Initializes the Resource Manager and the associated database
+ * entries for HW and SRAM resources. Must be called before any other
+ * Resource Manager functions.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ */
+void tf_rm_init(struct tf *tfp);
+
+/**
+ * Allocates and validates both HW and SRAM resources per the NVM
+ * configuration. If any allocation fails, all resources for the
+ * session are deallocated.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ */
+int tf_rm_allocate_validate(struct tf *tfp);
+
+/**
+ * Closes the Resource Manager and frees all allocated resources per
+ * the associated database.
+ *
+ * [in] tfp
+ *   Pointer to TF handle
+ *
+ * Returns
+ *   - (0) if successful.
+ *   - (-EINVAL) on failure.
+ *   - (-ENOTEMPTY) if resources are not cleaned up before close
+ */
+int tf_rm_close(struct tf *tfp);
 #endif /* TF_RM_H_ */
diff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h
index 32e53c0..651d3ee 100644
--- a/drivers/net/bnxt/tf_core/tf_session.h
+++ b/drivers/net/bnxt/tf_core/tf_session.h
@@ -9,6 +9,7 @@
 #include
 #include
 
+#include "bitalloc.h"
 #include "tf_core.h"
 #include "tf_rm.h"
-- 2.7.4
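To make the bit allocator's interface concrete, the short sketch below exercises the public API declared in bitalloc.h above. It is illustrative only and not part of the patch: the pool name flow_id_pool and the pool size of 128 are invented for this example, and the TruFlow resource manager that actually consumes the allocator arrives in later patches of this series.

#include <stdio.h>

#include "bitalloc.h"

int main(void)
{
	/* BITALLOC_INST() declares the pool plus the storage it needs */
	BITALLOC_INST(flow_id_pool, 128);
	int first, id;

	/* All 128 indices start out free after ba_init() */
	if (ba_init(flow_id_pool, 128))
		return 1;

	/* ba_alloc() hands out the lowest free index, 0 here */
	first = ba_alloc(flow_id_pool);
	if (first == BA_NO_ENTRY_FOUND)
		return 1;

	/* ba_alloc_index() reserves a caller-chosen index instead */
	if (ba_alloc_index(flow_id_pool, 7) < 0)
		return 1;

	printf("in use: %d, free: %d\n",
	       ba_inuse_count(flow_id_pool),	/* 2 */
	       ba_free_count(flow_id_pool));	/* 126 */

	/* Walk every allocated index in ascending order */
	for (id = ba_find_next_inuse(flow_id_pool, 0);
	     id != BA_NO_ENTRY_FOUND;
	     id = ba_find_next_inuse(flow_id_pool, id + 1))
		printf("index %d is in use\n", id);

	/* Return both indices to the pool */
	ba_free(flow_id_pool, first);
	ba_free(flow_id_pool, 7);

	return 0;
}

The per-level summary words are what let ba_alloc() find a free index by examining a single word at each level of the tree rather than scanning the whole bitmap.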
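The TF_RM_GET_POOLS() macro added to tf_rm.h is driven by token pasting, which is easiest to see with a concrete expansion. The sketch below is hypothetical: struct example_session and its TF_EXAMPLE_POOL_RX/_TX members are invented for illustration; the real per-direction pool members of struct tf_session are not part of this patch.

#include <stddef.h>

#include "bitalloc.h"
#include "tf_core.h"	/* enum tf_dir, TF_DIR_RX, TF_DIR_TX */
#include "tf_rm.h"

/* Hypothetical session layout; the macro only requires that the RX and
 * TX pool members share a common base name (TF_EXAMPLE_POOL here).
 */
struct example_session {
	struct bitalloc *TF_EXAMPLE_POOL_RX;
	struct bitalloc *TF_EXAMPLE_POOL_TX;
};

struct bitalloc *
example_get_pool(struct example_session *tfs, enum tf_dir dir)
{
	struct bitalloc *session_pool = NULL;
	int rc;

	/* Expands to *(&session_pool) = tfs->TF_EXAMPLE_POOL_RX (or _TX) */
	TF_RM_GET_POOLS(tfs, dir, &session_pool, TF_EXAMPLE_POOL, rc);
	if (rc)
		return NULL;	/* rc == -1: invalid direction */

	return session_pool;
}

The resource manager code added later in the series is expected to follow the same pattern with the real session pool names.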